00:00:00.000 Started by upstream project "autotest-per-patch" build number 127130
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.044 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.044 The recommended git tool is: git
00:00:00.045 using credential 00000000-0000-0000-0000-000000000002
00:00:00.046 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.062 Fetching changes from the remote Git repository
00:00:00.063 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.087 Using shallow fetch with depth 1
00:00:00.087 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.087 > git --version # timeout=10
00:00:00.103 > git --version # 'git version 2.39.2'
00:00:00.103 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.118 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.118 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.149 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.158 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.169 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:06.169 > git config core.sparsecheckout # timeout=10
00:00:06.178 > git read-tree -mu HEAD # timeout=10
00:00:06.194 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:06.222 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:06.222 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:06.311 [Pipeline] Start of Pipeline
00:00:06.325 [Pipeline] library
00:00:06.326 Loading library shm_lib@master
00:00:06.326 Library shm_lib@master is cached. Copying from home.
00:00:06.342 [Pipeline] node
00:00:06.355 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:06.357 [Pipeline] {
00:00:06.366 [Pipeline] catchError
00:00:06.367 [Pipeline] {
00:00:06.379 [Pipeline] wrap
00:00:06.389 [Pipeline] {
00:00:06.396 [Pipeline] stage
00:00:06.397 [Pipeline] { (Prologue)
00:00:06.572 [Pipeline] sh
00:00:06.853 + logger -p user.info -t JENKINS-CI
00:00:06.868 [Pipeline] echo
00:00:06.869 Node: WFP21
00:00:06.875 [Pipeline] sh
00:00:07.172 [Pipeline] setCustomBuildProperty
00:00:07.180 [Pipeline] echo
00:00:07.180 Cleanup processes
00:00:07.184 [Pipeline] sh
00:00:07.462 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:07.462 2382704 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:07.473 [Pipeline] sh
00:00:07.751 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:07.751 ++ grep -v 'sudo pgrep'
00:00:07.751 ++ awk '{print $1}'
00:00:07.751 + sudo kill -9
00:00:07.751 + true
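
For reference, the cleanup step above boils down to the following pattern (a minimal standalone sketch; the workspace path is the one this job uses, and the `|| true` mirrors the `+ true` line that keeps the step green when nothing is left running):

  #!/usr/bin/env bash
  # pgrep -af prints "PID cmdline" for every process whose command line
  # mentions the workspace; drop the pgrep itself, keep only the PIDs.
  ws=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
  # $pids is intentionally unquoted so each PID becomes its own argument;
  # kill -9 with an empty list exits non-zero, exactly as traced above.
  sudo kill -9 $pids || true
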
00:00:07.766 [Pipeline] cleanWs
00:00:07.775 [WS-CLEANUP] Deleting project workspace...
00:00:07.775 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.781 [WS-CLEANUP] done
00:00:07.785 [Pipeline] setCustomBuildProperty
00:00:07.798 [Pipeline] sh
00:00:08.082 + sudo git config --global --replace-all safe.directory '*'
00:00:08.184 [Pipeline] httpRequest
00:00:08.219 [Pipeline] echo
00:00:08.220 Sorcerer 10.211.164.101 is alive
00:00:08.229 [Pipeline] httpRequest
00:00:08.234 HttpMethod: GET
00:00:08.235 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.235 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.255 Response Code: HTTP/1.1 200 OK
00:00:08.255 Success: Status code 200 is in the accepted range: 200,404
00:00:08.256 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:10.209 [Pipeline] sh
00:00:10.496 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:10.513 [Pipeline] httpRequest
00:00:10.541 [Pipeline] echo
00:00:10.543 Sorcerer 10.211.164.101 is alive
00:00:10.552 [Pipeline] httpRequest
00:00:10.557 HttpMethod: GET
00:00:10.558 URL: http://10.211.164.101/packages/spdk_e5ef9abc9ee9c86a9ff61108fb262630413e40ec.tar.gz
00:00:10.558 Sending request to url: http://10.211.164.101/packages/spdk_e5ef9abc9ee9c86a9ff61108fb262630413e40ec.tar.gz
00:00:10.579 Response Code: HTTP/1.1 200 OK
00:00:10.580 Success: Status code 200 is in the accepted range: 200,404
00:00:10.580 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_e5ef9abc9ee9c86a9ff61108fb262630413e40ec.tar.gz
00:02:11.811 [Pipeline] sh
00:02:12.102 + tar --no-same-owner -xf spdk_e5ef9abc9ee9c86a9ff61108fb262630413e40ec.tar.gz
00:02:14.655 [Pipeline] sh
00:02:14.941 + git -C spdk log --oneline -n5
00:02:14.942 e5ef9abc9 test/scheduler: Add a system level test for the scheduler_set_option RPC
00:02:14.942 223450b47 lib/event: Add support for core isolation in scheduling
00:02:14.942 6a0934c18 lib/event: Modify spdk_reactor_set_interrupt_mode() to be called from scheduling reactor
00:02:14.942 d005e023b raid: fix empty slot not updated in sb after resize
00:02:14.942 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:02:14.955 [Pipeline] }
00:02:14.971 [Pipeline] // stage
00:02:14.980 [Pipeline] stage
00:02:14.982 [Pipeline] { (Prepare)
00:02:15.001 [Pipeline] writeFile
00:02:15.018 [Pipeline] sh
00:02:15.302 + logger -p user.info -t JENKINS-CI
00:02:15.316 [Pipeline] sh
00:02:15.600 + logger -p user.info -t JENKINS-CI
00:02:15.614 [Pipeline] sh
00:02:15.899 + cat autorun-spdk.conf
00:02:15.899 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:15.899 SPDK_TEST_NVMF=1
00:02:15.899 SPDK_TEST_NVME_CLI=1
00:02:15.899 SPDK_TEST_NVMF_NICS=mlx5
00:02:15.899 SPDK_RUN_UBSAN=1
00:02:15.899 NET_TYPE=phy
00:02:15.907 RUN_NIGHTLY=0
00:02:15.911 [Pipeline] readFile
00:02:15.936 [Pipeline] withEnv
00:02:15.938 [Pipeline] {
00:02:15.951 [Pipeline] sh
00:02:16.235 + set -ex
00:02:16.235 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:02:16.235 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:02:16.235 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:16.235 ++ SPDK_TEST_NVMF=1
00:02:16.235 ++ SPDK_TEST_NVME_CLI=1
00:02:16.235 ++ SPDK_TEST_NVMF_NICS=mlx5
00:02:16.235 ++ SPDK_RUN_UBSAN=1
00:02:16.235 ++ NET_TYPE=phy
00:02:16.235 ++ RUN_NIGHTLY=0
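
The step above only proceeds when the job actually wrote its config, then sources it so every SPDK_* switch becomes a shell variable for the driver-selection `case` that follows. A minimal reproduction of that pattern (same file path; the mlx5 mapping is the branch this run takes):

  #!/usr/bin/env bash
  set -ex   # trace each command, abort on the first failure
  conf=/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
  [[ -f $conf ]]        # fail fast if the job config is missing
  source "$conf"        # imports SPDK_TEST_NVMF_NICS, NET_TYPE, ...
  case $SPDK_TEST_NVMF_NICS in
      mlx5) DRIVERS=mlx5_ib ;;
  esac
  echo "drivers to reload: ${DRIVERS:-none}"
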
00:02:16.235 + case $SPDK_TEST_NVMF_NICS in
00:02:16.235 + DRIVERS=mlx5_ib
00:02:16.235 + [[ -n mlx5_ib ]]
00:02:16.235 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:16.235 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:22.805 rmmod: ERROR: Module irdma is not currently loaded
00:02:22.805 rmmod: ERROR: Module i40iw is not currently loaded
00:02:22.805 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:22.805 + true
00:02:22.805 + for D in $DRIVERS
00:02:22.805 + sudo modprobe mlx5_ib
00:02:22.805 + exit 0
00:02:22.815 [Pipeline] }
00:02:22.833 [Pipeline] // withEnv
00:02:22.838 [Pipeline] }
00:02:22.854 [Pipeline] // stage
00:02:22.863 [Pipeline] catchError
00:02:22.865 [Pipeline] {
00:02:22.880 [Pipeline] timeout
00:02:22.881 Timeout set to expire in 1 hr 0 min
00:02:22.882 [Pipeline] {
00:02:22.897 [Pipeline] stage
00:02:22.900 [Pipeline] { (Tests)
00:02:22.915 [Pipeline] sh
00:02:23.199 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:02:23.199 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:02:23.199 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:02:23.199 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:02:23.200 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:23.200 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:02:23.200 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:02:23.200 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:02:23.200 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:02:23.200 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:02:23.200 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:02:23.200 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:02:23.200 + source /etc/os-release
00:02:23.200 ++ NAME='Fedora Linux'
00:02:23.200 ++ VERSION='38 (Cloud Edition)'
00:02:23.200 ++ ID=fedora
00:02:23.200 ++ VERSION_ID=38
00:02:23.200 ++ VERSION_CODENAME=
00:02:23.200 ++ PLATFORM_ID=platform:f38
00:02:23.200 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:23.200 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:23.200 ++ LOGO=fedora-logo-icon
00:02:23.200 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:23.200 ++ HOME_URL=https://fedoraproject.org/
00:02:23.200 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:23.200 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:23.200 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:23.200 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:23.200 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:23.200 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:23.200 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:23.200 ++ SUPPORT_END=2024-05-14
00:02:23.200 ++ VARIANT='Cloud Edition'
00:02:23.200 ++ VARIANT_ID=cloud
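
Sourcing /etc/os-release, as the runner does above, works because the file is plain KEY=value shell assignments; the imported variables then drive distro-specific branches. A short sketch:

  #!/usr/bin/env bash
  source /etc/os-release
  echo "running on $PRETTY_NAME (ID=$ID, VERSION_ID=$VERSION_ID)"
  # Branch the way the autotest scripts do (Fedora vs FreeBSD checks below).
  if [[ $NAME == "Fedora Linux" ]]; then
      echo "Fedora-specific setup goes here"
  fi
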
00:02:23.200 + uname -a
00:02:23.200 Linux spdk-wfp-21 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:23.200 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:02:27.395 Hugepages
00:02:27.395 node hugesize free / total
00:02:27.395 node0 1048576kB 0 / 0
00:02:27.395 node0 2048kB 0 / 0
00:02:27.395 node1 1048576kB 0 / 0
00:02:27.395 node1 2048kB 0 / 0
00:02:27.395
00:02:27.395 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:27.395 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:27.395 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:27.395 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:27.395 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:27.395 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:27.395 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:27.396 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:27.396 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:27.396 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:27.396 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:27.396 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:27.396 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:27.396 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:27.396 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:27.396 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:27.396 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:27.396 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:02:27.396 + rm -f /tmp/spdk-ld-path
00:02:27.396 + source autorun-spdk.conf
00:02:27.396 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:27.396 ++ SPDK_TEST_NVMF=1
00:02:27.396 ++ SPDK_TEST_NVME_CLI=1
00:02:27.396 ++ SPDK_TEST_NVMF_NICS=mlx5
00:02:27.396 ++ SPDK_RUN_UBSAN=1
00:02:27.396 ++ NET_TYPE=phy
00:02:27.396 ++ RUN_NIGHTLY=0
00:02:27.396 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:27.396 + [[ -n '' ]]
00:02:27.396 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:27.396 + for M in /var/spdk/build-*-manifest.txt
00:02:27.396 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:27.396 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:02:27.396 + for M in /var/spdk/build-*-manifest.txt
00:02:27.396 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:27.396 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
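
The manifest collection above globs /var/spdk/build-*-manifest.txt and copies whatever actually exists into the job's output directory; the `[[ -f $M ]]` test turns a missing manifest into a no-op rather than an error. A standalone equivalent (same paths):

  #!/usr/bin/env bash
  out=/var/jenkins/workspace/nvmf-phy-autotest/output
  for M in /var/spdk/build-*-manifest.txt; do
      # Skips the unexpanded glob when nothing matches, and non-files.
      [[ -f $M ]] && cp "$M" "$out/"
  done
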
00:02:27.396 ++ uname
00:02:27.396 + [[ Linux == \L\i\n\u\x ]]
00:02:27.396 + sudo dmesg -T
00:02:27.396 + sudo dmesg --clear
00:02:27.396 + dmesg_pid=2384350
00:02:27.396 + [[ Fedora Linux == FreeBSD ]]
00:02:27.396 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:27.396 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:27.396 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:27.396 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:27.396 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:27.396 + [[ -x /usr/src/fio-static/fio ]]
00:02:27.396 + sudo dmesg -Tw
00:02:27.396 + export FIO_BIN=/usr/src/fio-static/fio
00:02:27.396 + FIO_BIN=/usr/src/fio-static/fio
00:02:27.396 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:27.396 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:27.396 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:27.396 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:27.396 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:27.396 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:27.396 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:27.396 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:27.396 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:02:27.396 Test configuration:
00:02:27.396 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:27.396 SPDK_TEST_NVMF=1
00:02:27.396 SPDK_TEST_NVME_CLI=1
00:02:27.396 SPDK_TEST_NVMF_NICS=mlx5
00:02:27.396 SPDK_RUN_UBSAN=1
00:02:27.396 NET_TYPE=phy
00:02:27.396 RUN_NIGHTLY=0
07:07:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
07:07:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
07:07:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
07:07:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
07:07:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:07:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:07:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:07:59 -- paths/export.sh@5 -- $ export PATH
07:07:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
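
The paths/export.sh trace above prepends one tool prefix per line (golangci, go, protoc), which is why the final `echo` shows protoc first and the older entries duplicated further down. The prepend-and-export idiom, reduced to its core (same prefixes):

  #!/usr/bin/env bash
  for dir in /opt/golangci/1.54.2/bin /opt/go/1.21.1/bin /opt/protoc/21.7/bin; do
      PATH=$dir:$PATH   # the most recently prepended dir wins lookup
  done
  export PATH
  echo "$PATH"
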
07:07:59 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
07:07:59 -- common/autobuild_common.sh@447 -- $ date +%s
07:07:59 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721884079.XXXXXX
07:07:59 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721884079.6xM6q3
07:07:59 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
07:07:59 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
07:07:59 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
07:07:59 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
07:07:59 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
07:07:59 -- common/autobuild_common.sh@463 -- $ get_config_params
07:07:59 -- common/autotest_common.sh@398 -- $ xtrace_disable
07:07:59 -- common/autotest_common.sh@10 -- $ set +x
07:07:59 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
07:07:59 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
07:07:59 -- pm/common@17 -- $ local monitor
07:07:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:07:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:07:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:07:59 -- pm/common@21 -- $ date +%s
07:07:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:07:59 -- pm/common@21 -- $ date +%s
07:07:59 -- pm/common@21 -- $ date +%s
07:07:59 -- pm/common@25 -- $ sleep 1
07:07:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721884079
07:07:59 -- pm/common@21 -- $ date +%s
07:07:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721884079
07:07:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721884079
07:07:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721884079
Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721884079_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721884079_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721884079_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721884079_collect-bmc-pm.bmc.pm.log
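
start_monitor_resources above fans out one background collector per resource (cpu-load, vmstat, cpu-temp, bmc-pm), all logging under output/power with a shared name built from `date +%s`; the "Redirecting to ..." lines are each collector detaching into its own .pm.log. A generic sketch of that fan-out (collector names as traced; the -d/-l/-p flag meanings are read off the trace and assumed to be output dir, log-to-file, and monitor name):

  #!/usr/bin/env bash
  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  power_dir=$SPDK_DIR/../output/power
  name="monitor.autobuild.sh.$(date +%s)"   # shared log-name suffix
  mkdir -p "$power_dir"
  for c in collect-cpu-load collect-vmstat collect-cpu-temp; do
      # Each collector writes ${name}_${c}.pm.log; backgrounded here.
      "$SPDK_DIR/scripts/perf/pm/$c" -d "$power_dir" -l -p "$name" &
  done
  # collect-bmc-pm needs root, hence the 'sudo -E' in the trace above.
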
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:28.333 07:08:00 -- spdk/autobuild.sh@16 -- $ date -u 00:02:28.333 Thu Jul 25 05:08:00 AM UTC 2024 00:02:28.333 07:08:00 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:28.333 v24.09-pre-321-ge5ef9abc9 00:02:28.333 07:08:00 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:28.333 07:08:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:28.333 07:08:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:28.333 07:08:00 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:28.333 07:08:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:28.333 07:08:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.333 ************************************ 00:02:28.333 START TEST ubsan 00:02:28.333 ************************************ 00:02:28.333 07:08:00 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:28.333 using ubsan 00:02:28.333 00:02:28.333 real 0m0.001s 00:02:28.333 user 0m0.000s 00:02:28.333 sys 0m0.000s 00:02:28.333 07:08:00 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:28.333 07:08:00 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:28.333 ************************************ 00:02:28.333 END TEST ubsan 00:02:28.333 ************************************ 00:02:28.333 07:08:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:28.333 07:08:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:28.333 07:08:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:28.333 07:08:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:28.333 07:08:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:28.334 07:08:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:28.334 07:08:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:28.334 07:08:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:28.334 07:08:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:02:28.592 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:28.592 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:28.851 Using 'verbs' RDMA provider 00:02:42.003 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:56.885 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:56.885 Creating mk/config.mk...done. 00:02:56.885 Creating mk/cc.flags.mk...done. 00:02:56.885 Type 'make' to build. 00:02:56.885 07:08:27 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:56.885 07:08:27 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:56.885 07:08:27 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:56.885 07:08:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:56.885 ************************************ 00:02:56.885 START TEST make 00:02:56.885 ************************************ 00:02:56.885 07:08:27 make -- common/autotest_common.sh@1125 -- $ make -j112 00:02:56.885 make[1]: Nothing to be done for 'all'. 
00:03:03.457 The Meson build system
00:03:03.457 Version: 1.3.1
00:03:03.457 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:03:03.457 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:03:03.457 Build type: native build
00:03:03.457 Program cat found: YES (/usr/bin/cat)
00:03:03.458 Project name: DPDK
00:03:03.458 Project version: 24.03.0
00:03:03.458 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:03.458 C linker for the host machine: cc ld.bfd 2.39-16
00:03:03.458 Host machine cpu family: x86_64
00:03:03.458 Host machine cpu: x86_64
00:03:03.458 Message: ## Building in Developer Mode ##
00:03:03.458 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:03.458 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:03.458 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:03.458 Program python3 found: YES (/usr/bin/python3)
00:03:03.458 Program cat found: YES (/usr/bin/cat)
00:03:03.458 Compiler for C supports arguments -march=native: YES
00:03:03.458 Checking for size of "void *" : 8
00:03:03.458 Checking for size of "void *" : 8 (cached)
00:03:03.458 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:03:03.458 Library m found: YES
00:03:03.458 Library numa found: YES
00:03:03.458 Has header "numaif.h" : YES
00:03:03.458 Library fdt found: NO
00:03:03.458 Library execinfo found: NO
00:03:03.458 Has header "execinfo.h" : YES
00:03:03.458 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:03.458 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:03.458 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:03.458 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:03.458 Run-time dependency openssl found: YES 3.0.9
00:03:03.458 Run-time dependency libpcap found: YES 1.10.4
00:03:03.458 Has header "pcap.h" with dependency libpcap: YES
00:03:03.458 Compiler for C supports arguments -Wcast-qual: YES
00:03:03.458 Compiler for C supports arguments -Wdeprecated: YES
00:03:03.458 Compiler for C supports arguments -Wformat: YES
00:03:03.458 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:03.458 Compiler for C supports arguments -Wformat-security: NO
00:03:03.458 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:03.458 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:03.458 Compiler for C supports arguments -Wnested-externs: YES
00:03:03.458 Compiler for C supports arguments -Wold-style-definition: YES
00:03:03.458 Compiler for C supports arguments -Wpointer-arith: YES
00:03:03.458 Compiler for C supports arguments -Wsign-compare: YES
00:03:03.458 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:03.458 Compiler for C supports arguments -Wundef: YES
00:03:03.458 Compiler for C supports arguments -Wwrite-strings: YES
00:03:03.458 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:03.458 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:03.458 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:03.458 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:03.458 Program objdump found: YES (/usr/bin/objdump)
00:03:03.458 Compiler for C supports arguments -mavx512f: YES
00:03:03.458 Checking if "AVX512 checking" compiles: YES
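
The "Run-time dependency ... found" lines above are pkg-config probes: a missing .pc file (libarchive, libbsd, jansson here) reports NO and the corresponding feature is simply compiled out, while openssl and libpcap resolve with versions. The same checks can be run by hand:

  # Ask pkg-config what meson asked it: does the dependency exist, and which version?
  pkg-config --exists openssl && pkg-config --modversion openssl   # 3.0.9 on this host
  pkg-config --modversion libpcap                                  # 1.10.4
  pkg-config --exists libarchive || echo "libarchive: NO (no .pc file found)"
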
00:03:03.458 Fetching value of define "__SSE4_2__" : 1
00:03:03.458 Fetching value of define "__AES__" : 1
00:03:03.458 Fetching value of define "__AVX__" : 1
00:03:03.458 Fetching value of define "__AVX2__" : 1
00:03:03.458 Fetching value of define "__AVX512BW__" : 1
00:03:03.458 Fetching value of define "__AVX512CD__" : 1
00:03:03.458 Fetching value of define "__AVX512DQ__" : 1
00:03:03.458 Fetching value of define "__AVX512F__" : 1
00:03:03.458 Fetching value of define "__AVX512VL__" : 1
00:03:03.458 Fetching value of define "__PCLMUL__" : 1
00:03:03.458 Fetching value of define "__RDRND__" : 1
00:03:03.458 Fetching value of define "__RDSEED__" : 1
00:03:03.458 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:03.458 Fetching value of define "__znver1__" : (undefined)
00:03:03.458 Fetching value of define "__znver2__" : (undefined)
00:03:03.458 Fetching value of define "__znver3__" : (undefined)
00:03:03.458 Fetching value of define "__znver4__" : (undefined)
00:03:03.458 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:03.458 Message: lib/log: Defining dependency "log"
00:03:03.458 Message: lib/kvargs: Defining dependency "kvargs"
00:03:03.458 Message: lib/telemetry: Defining dependency "telemetry"
00:03:03.458 Checking for function "getentropy" : NO
00:03:03.458 Message: lib/eal: Defining dependency "eal"
00:03:03.458 Message: lib/ring: Defining dependency "ring"
00:03:03.458 Message: lib/rcu: Defining dependency "rcu"
00:03:03.458 Message: lib/mempool: Defining dependency "mempool"
00:03:03.458 Message: lib/mbuf: Defining dependency "mbuf"
00:03:03.458 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:03.458 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:03.458 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:03.458 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:03.458 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:03.458 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:03.458 Compiler for C supports arguments -mpclmul: YES
00:03:03.458 Compiler for C supports arguments -maes: YES
00:03:03.458 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:03.458 Compiler for C supports arguments -mavx512bw: YES
00:03:03.458 Compiler for C supports arguments -mavx512dq: YES
00:03:03.458 Compiler for C supports arguments -mavx512vl: YES
00:03:03.458 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:03.458 Compiler for C supports arguments -mavx2: YES
00:03:03.458 Compiler for C supports arguments -mavx: YES
00:03:03.458 Message: lib/net: Defining dependency "net"
00:03:03.458 Message: lib/meter: Defining dependency "meter"
00:03:03.458 Message: lib/ethdev: Defining dependency "ethdev"
00:03:03.458 Message: lib/pci: Defining dependency "pci"
00:03:03.458 Message: lib/cmdline: Defining dependency "cmdline"
00:03:03.458 Message: lib/hash: Defining dependency "hash"
00:03:03.458 Message: lib/timer: Defining dependency "timer"
00:03:03.458 Message: lib/compressdev: Defining dependency "compressdev"
00:03:03.458 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:03.458 Message: lib/dmadev: Defining dependency "dmadev"
00:03:03.458 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:03.458 Message: lib/power: Defining dependency "power"
00:03:03.458 Message: lib/reorder: Defining dependency "reorder"
00:03:03.458 Message: lib/security: Defining dependency "security"
00:03:03.458 Has header "linux/userfaultfd.h" : YES
00:03:03.458 Has header "linux/vduse.h" : YES
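
The "Fetching value of define" lines above are compile-time probes: meson pushes the build flags through the compiler and reads back which ISA macros (__AVX512F__, __PCLMUL__, __RDRND__, ...) the target defines, reusing cached answers on repeats. The same dump can be pulled manually, assuming gcc or clang:

  # Dump all predefined macros for -march=native and keep the ISA ones.
  echo | cc -march=native -dM -E - | grep -E '__(AVX|AES|PCLMUL|RDRND|RDSEED|SSE4_2)'
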
00:03:03.458 Message: lib/vhost: Defining dependency "vhost"
00:03:03.458 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:03.458 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:03.458 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:03.458 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:03.458 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:03.458 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:03.458 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:03.458 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:03.458 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:03.458 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:03.458 Program doxygen found: YES (/usr/bin/doxygen)
00:03:03.458 Configuring doxy-api-html.conf using configuration
00:03:03.458 Configuring doxy-api-man.conf using configuration
00:03:03.458 Program mandb found: YES (/usr/bin/mandb)
00:03:03.458 Program sphinx-build found: NO
00:03:03.458 Configuring rte_build_config.h using configuration
00:03:03.458 Message:
00:03:03.458 =================
00:03:03.458 Applications Enabled
00:03:03.458 =================
00:03:03.458
00:03:03.458 apps:
00:03:03.458
00:03:03.458
00:03:03.458 Message:
00:03:03.458 =================
00:03:03.458 Libraries Enabled
00:03:03.458 =================
00:03:03.458
00:03:03.458 libs:
00:03:03.458 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:03.458 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:03.458 cryptodev, dmadev, power, reorder, security, vhost,
00:03:03.458
00:03:03.458 Message:
00:03:03.458 ===============
00:03:03.458 Drivers Enabled
00:03:03.458 ===============
00:03:03.458
00:03:03.458 common:
00:03:03.458
00:03:03.458 bus:
00:03:03.458 pci, vdev,
00:03:03.458 mempool:
00:03:03.458 ring,
00:03:03.458 dma:
00:03:03.458
00:03:03.458 net:
00:03:03.458
00:03:03.458 crypto:
00:03:03.458
00:03:03.458 compress:
00:03:03.458
00:03:03.458 vdpa:
00:03:03.458
00:03:03.458
00:03:03.458 Message:
00:03:03.458 =================
00:03:03.458 Content Skipped
00:03:03.458 =================
00:03:03.458
00:03:03.458 apps:
00:03:03.458 dumpcap: explicitly disabled via build config
00:03:03.458 graph: explicitly disabled via build config
00:03:03.458 pdump: explicitly disabled via build config
00:03:03.458 proc-info: explicitly disabled via build config
00:03:03.458 test-acl: explicitly disabled via build config
00:03:03.458 test-bbdev: explicitly disabled via build config
00:03:03.458 test-cmdline: explicitly disabled via build config
00:03:03.458 test-compress-perf: explicitly disabled via build config
00:03:03.458 test-crypto-perf: explicitly disabled via build config
00:03:03.458 test-dma-perf: explicitly disabled via build config
00:03:03.458 test-eventdev: explicitly disabled via build config
00:03:03.458 test-fib: explicitly disabled via build config
00:03:03.458 test-flow-perf: explicitly disabled via build config
00:03:03.458 test-gpudev: explicitly disabled via build config
00:03:03.458 test-mldev: explicitly disabled via build config
00:03:03.458 test-pipeline: explicitly disabled via build config
00:03:03.458 test-pmd: explicitly disabled via build config
00:03:03.459 test-regex: explicitly disabled via build config
00:03:03.459 test-sad: explicitly disabled via build config
00:03:03.459 test-security-perf: explicitly disabled via build config
00:03:03.459
00:03:03.459 libs:
00:03:03.459 argparse: explicitly disabled via build config
00:03:03.459 metrics: explicitly disabled via build config
00:03:03.459 acl: explicitly disabled via build config
00:03:03.459 bbdev: explicitly disabled via build config
00:03:03.459 bitratestats: explicitly disabled via build config
00:03:03.459 bpf: explicitly disabled via build config
00:03:03.459 cfgfile: explicitly disabled via build config
00:03:03.459 distributor: explicitly disabled via build config
00:03:03.459 efd: explicitly disabled via build config
00:03:03.459 eventdev: explicitly disabled via build config
00:03:03.459 dispatcher: explicitly disabled via build config
00:03:03.459 gpudev: explicitly disabled via build config
00:03:03.459 gro: explicitly disabled via build config
00:03:03.459 gso: explicitly disabled via build config
00:03:03.459 ip_frag: explicitly disabled via build config
00:03:03.459 jobstats: explicitly disabled via build config
00:03:03.459 latencystats: explicitly disabled via build config
00:03:03.459 lpm: explicitly disabled via build config
00:03:03.459 member: explicitly disabled via build config
00:03:03.459 pcapng: explicitly disabled via build config
00:03:03.459 rawdev: explicitly disabled via build config
00:03:03.459 regexdev: explicitly disabled via build config
00:03:03.459 mldev: explicitly disabled via build config
00:03:03.459 rib: explicitly disabled via build config
00:03:03.459 sched: explicitly disabled via build config
00:03:03.459 stack: explicitly disabled via build config
00:03:03.459 ipsec: explicitly disabled via build config
00:03:03.459 pdcp: explicitly disabled via build config
00:03:03.459 fib: explicitly disabled via build config
00:03:03.459 port: explicitly disabled via build config
00:03:03.459 pdump: explicitly disabled via build config
00:03:03.459 table: explicitly disabled via build config
00:03:03.459 pipeline: explicitly disabled via build config
00:03:03.459 graph: explicitly disabled via build config
00:03:03.459 node: explicitly disabled via build config
00:03:03.459
00:03:03.459 drivers:
00:03:03.459 common/cpt: not in enabled drivers build config
00:03:03.459 common/dpaax: not in enabled drivers build config
00:03:03.459 common/iavf: not in enabled drivers build config
00:03:03.459 common/idpf: not in enabled drivers build config
00:03:03.459 common/ionic: not in enabled drivers build config
00:03:03.459 common/mvep: not in enabled drivers build config
00:03:03.459 common/octeontx: not in enabled drivers build config
00:03:03.459 bus/auxiliary: not in enabled drivers build config
00:03:03.459 bus/cdx: not in enabled drivers build config
00:03:03.459 bus/dpaa: not in enabled drivers build config
00:03:03.459 bus/fslmc: not in enabled drivers build config
00:03:03.459 bus/ifpga: not in enabled drivers build config
00:03:03.459 bus/platform: not in enabled drivers build config
00:03:03.459 bus/uacce: not in enabled drivers build config
00:03:03.459 bus/vmbus: not in enabled drivers build config
00:03:03.459 common/cnxk: not in enabled drivers build config
00:03:03.459 common/mlx5: not in enabled drivers build config
00:03:03.459 common/nfp: not in enabled drivers build config
00:03:03.459 common/nitrox: not in enabled drivers build config
00:03:03.459 common/qat: not in enabled drivers build config
00:03:03.459 common/sfc_efx: not in enabled drivers build config
00:03:03.459 mempool/bucket: not in enabled drivers build config
00:03:03.459 mempool/cnxk: not in enabled drivers build config
00:03:03.459 mempool/dpaa: not in enabled drivers build config
00:03:03.459 mempool/dpaa2: not in enabled drivers build config
00:03:03.459 mempool/octeontx: not in enabled drivers build config
00:03:03.459 mempool/stack: not in enabled drivers build config
00:03:03.459 dma/cnxk: not in enabled drivers build config
00:03:03.459 dma/dpaa: not in enabled drivers build config
00:03:03.459 dma/dpaa2: not in enabled drivers build config
00:03:03.459 dma/hisilicon: not in enabled drivers build config
00:03:03.459 dma/idxd: not in enabled drivers build config
00:03:03.459 dma/ioat: not in enabled drivers build config
00:03:03.459 dma/skeleton: not in enabled drivers build config
00:03:03.459 net/af_packet: not in enabled drivers build config
00:03:03.459 net/af_xdp: not in enabled drivers build config
00:03:03.459 net/ark: not in enabled drivers build config
00:03:03.459 net/atlantic: not in enabled drivers build config
00:03:03.459 net/avp: not in enabled drivers build config
00:03:03.459 net/axgbe: not in enabled drivers build config
00:03:03.459 net/bnx2x: not in enabled drivers build config
00:03:03.459 net/bnxt: not in enabled drivers build config
00:03:03.459 net/bonding: not in enabled drivers build config
00:03:03.459 net/cnxk: not in enabled drivers build config
00:03:03.459 net/cpfl: not in enabled drivers build config
00:03:03.459 net/cxgbe: not in enabled drivers build config
00:03:03.459 net/dpaa: not in enabled drivers build config
00:03:03.459 net/dpaa2: not in enabled drivers build config
00:03:03.459 net/e1000: not in enabled drivers build config
00:03:03.459 net/ena: not in enabled drivers build config
00:03:03.459 net/enetc: not in enabled drivers build config
00:03:03.459 net/enetfec: not in enabled drivers build config
00:03:03.459 net/enic: not in enabled drivers build config
00:03:03.459 net/failsafe: not in enabled drivers build config
00:03:03.459 net/fm10k: not in enabled drivers build config
00:03:03.459 net/gve: not in enabled drivers build config
00:03:03.459 net/hinic: not in enabled drivers build config
00:03:03.459 net/hns3: not in enabled drivers build config
00:03:03.459 net/i40e: not in enabled drivers build config
00:03:03.459 net/iavf: not in enabled drivers build config
00:03:03.459 net/ice: not in enabled drivers build config
00:03:03.459 net/idpf: not in enabled drivers build config
00:03:03.459 net/igc: not in enabled drivers build config
00:03:03.459 net/ionic: not in enabled drivers build config
00:03:03.459 net/ipn3ke: not in enabled drivers build config
00:03:03.459 net/ixgbe: not in enabled drivers build config
00:03:03.459 net/mana: not in enabled drivers build config
00:03:03.459 net/memif: not in enabled drivers build config
00:03:03.459 net/mlx4: not in enabled drivers build config
00:03:03.459 net/mlx5: not in enabled drivers build config
00:03:03.459 net/mvneta: not in enabled drivers build config
00:03:03.459 net/mvpp2: not in enabled drivers build config
00:03:03.459 net/netvsc: not in enabled drivers build config
00:03:03.459 net/nfb: not in enabled drivers build config
00:03:03.459 net/nfp: not in enabled drivers build config
00:03:03.459 net/ngbe: not in enabled drivers build config
00:03:03.459 net/null: not in enabled drivers build config
00:03:03.459 net/octeontx: not in enabled drivers build config
00:03:03.459 net/octeon_ep: not in enabled drivers build config
00:03:03.459 net/pcap: not in enabled drivers build config
00:03:03.459 net/pfe: not in enabled drivers build config
00:03:03.459 net/qede: not in enabled drivers build config
00:03:03.459 net/ring: not in enabled drivers build config
00:03:03.459 net/sfc: not in enabled drivers build config
00:03:03.459 net/softnic: not in enabled drivers build config
00:03:03.459 net/tap: not in enabled drivers build config
00:03:03.459 net/thunderx: not in enabled drivers build config
00:03:03.459 net/txgbe: not in enabled drivers build config
00:03:03.459 net/vdev_netvsc: not in enabled drivers build config
00:03:03.459 net/vhost: not in enabled drivers build config
00:03:03.459 net/virtio: not in enabled drivers build config
00:03:03.459 net/vmxnet3: not in enabled drivers build config
00:03:03.459 raw/*: missing internal dependency, "rawdev"
00:03:03.459 crypto/armv8: not in enabled drivers build config
00:03:03.459 crypto/bcmfs: not in enabled drivers build config
00:03:03.459 crypto/caam_jr: not in enabled drivers build config
00:03:03.459 crypto/ccp: not in enabled drivers build config
00:03:03.459 crypto/cnxk: not in enabled drivers build config
00:03:03.459 crypto/dpaa_sec: not in enabled drivers build config
00:03:03.459 crypto/dpaa2_sec: not in enabled drivers build config
00:03:03.459 crypto/ipsec_mb: not in enabled drivers build config
00:03:03.459 crypto/mlx5: not in enabled drivers build config
00:03:03.459 crypto/mvsam: not in enabled drivers build config
00:03:03.459 crypto/nitrox: not in enabled drivers build config
00:03:03.459 crypto/null: not in enabled drivers build config
00:03:03.459 crypto/octeontx: not in enabled drivers build config
00:03:03.459 crypto/openssl: not in enabled drivers build config
00:03:03.459 crypto/scheduler: not in enabled drivers build config
00:03:03.459 crypto/uadk: not in enabled drivers build config
00:03:03.459 crypto/virtio: not in enabled drivers build config
00:03:03.459 compress/isal: not in enabled drivers build config
00:03:03.459 compress/mlx5: not in enabled drivers build config
00:03:03.459 compress/nitrox: not in enabled drivers build config
00:03:03.459 compress/octeontx: not in enabled drivers build config
00:03:03.459 compress/zlib: not in enabled drivers build config
00:03:03.459 regex/*: missing internal dependency, "regexdev"
00:03:03.459 ml/*: missing internal dependency, "mldev"
00:03:03.459 vdpa/ifc: not in enabled drivers build config
00:03:03.459 vdpa/mlx5: not in enabled drivers build config
00:03:03.459 vdpa/nfp: not in enabled drivers build config
00:03:03.459 vdpa/sfc: not in enabled drivers build config
00:03:03.459 event/*: missing internal dependency, "eventdev"
00:03:03.459 baseband/*: missing internal dependency, "bbdev"
00:03:03.459 gpu/*: missing internal dependency, "gpudev"
00:03:03.459
00:03:03.459
00:03:03.459 Build targets in project: 85
00:03:03.459
00:03:03.459 DPDK 24.03.0
00:03:03.459
00:03:03.459 User defined options
00:03:03.459 buildtype : debug
00:03:03.459 default_library : shared
00:03:03.459 libdir : lib
00:03:03.459 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:03:03.459 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:03.459 c_link_args :
00:03:03.459 cpu_instruction_set: native
00:03:03.459 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:03:03.459 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:03:03.459 enable_docs : false
00:03:03.459 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:03.459 enable_kmods : false
00:03:03.459 max_lcores : 128
00:03:03.459 tests : false
00:03:03.459
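
The "User defined options" block above is the entire DPDK configure surface for this run and can be replayed outside the SPDK wrapper with a plain meson invocation; the option names below are exactly the ones printed, with the two long disable lists elided here to keep the sketch readable (their full values are the comma-joined lists above):

  meson setup build-tmp \
      -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
      --prefix=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps='test-fib,test-sad,...' -Ddisable_libs='bbdev,argparse,...' \
      -Denable_docs=false -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
      -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
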
00:03:03.459 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:03.459 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
00:03:03.727 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:03.727 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:03.727 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:03.727 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:03.727 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:03.727 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:03.727 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:03.727 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:03.727 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:03.727 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:03.727 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:03.727 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:03.727 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:03.727 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:03.727 [15/268] Linking static target lib/librte_kvargs.a
00:03:03.727 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:03.727 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:03.727 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:03.986 [19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:03.986 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:03.986 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:03.986 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:03.986 [23/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:03.986 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:03.986 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:03.986 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:03.986 [27/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:03.986 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:03.986 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:03.986 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:03.986 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:03.986 [32/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:03.986 [33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:03.986 [34/268] Linking static target lib/librte_log.a
00:03:04.245 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:04.245 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:04.245 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:04.245 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:04.245 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:04.245 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:04.245 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:04.245 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:04.245 [43/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:04.245 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:04.245 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:04.245 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:04.245 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:04.245 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:04.245 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:04.245 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:04.245 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:04.245 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:04.245 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:04.245 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:04.245 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:04.245 [56/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.245 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:04.245 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:04.245 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:04.245 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:04.245 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:04.245 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:04.245 [63/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:04.245 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:04.245 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:04.245 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:04.245 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:04.245 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:04.245 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:04.245 [70/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:04.245 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:04.245 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:04.245 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:04.245 [74/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:04.245 [75/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:04.245 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:04.245 [77/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:04.245 [78/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.245 [79/268] Linking static target lib/librte_telemetry.a
00:03:04.245 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:04.245 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:04.245 [82/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:04.245 [83/268] Linking static target lib/librte_meter.a
00:03:04.245 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:04.245 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:04.245 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:04.245 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:04.245 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:04.245 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:04.245 [90/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:04.245 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:04.245 [92/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:04.245 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:04.245 [94/268] Linking static target lib/librte_ring.a
00:03:04.245 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:04.245 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:04.245 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:04.245 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:04.245 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:04.245 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:04.245 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:04.245 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:04.245 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:04.245 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:04.504 [105/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:04.504 [106/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:04.504 [107/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:04.504 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:04.504 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:04.504 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:04.504 [111/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:04.504 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:04.504 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:04.504 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:04.504 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:04.504 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:04.504 [117/268] Linking static target lib/librte_cmdline.a
00:03:04.504 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:04.504 [119/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:04.504 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:04.504 [121/268] Linking static target lib/librte_mempool.a
00:03:04.504 [122/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:04.504 [123/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:04.504 [124/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:04.504 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:04.504 [126/268] Linking static target lib/librte_rcu.a
00:03:04.504 [127/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:04.504 [128/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:04.504 [129/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:04.504 [130/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:04.504 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:04.504 [132/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:04.504 [133/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:04.504 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:04.504 [135/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:04.504 [136/268] Linking static target lib/librte_eal.a
00:03:04.504 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:04.504 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:04.504 [139/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:04.504 [140/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:04.504 [141/268] Linking static target lib/librte_net.a
00:03:04.504 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:04.504 [143/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:04.504 [144/268] Linking static target lib/librte_compressdev.a
00:03:04.504 [145/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:04.504 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:04.504 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:04.504 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:04.504 [149/268] Linking static target lib/librte_dmadev.a
00:03:04.504 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:04.504 [151/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.504 [152/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:04.504 [153/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:04.504 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:04.505 [155/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:04.505 [157/268] Linking static target lib/librte_mbuf.a 00:03:04.505 [158/268] Linking target lib/librte_log.so.24.1 00:03:04.505 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:04.505 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:04.763 [161/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.763 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:04.763 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.763 [164/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:04.763 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.763 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.763 [167/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:04.763 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.763 [169/268] Linking static target lib/librte_power.a 00:03:04.763 [170/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:04.763 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:04.763 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:04.763 [173/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.763 [174/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.763 [175/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:04.763 [176/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:04.763 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:04.763 [178/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:04.763 [179/268] Linking static target lib/librte_security.a 00:03:04.763 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.763 [181/268] Linking static target lib/librte_reorder.a 00:03:04.763 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:04.763 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.763 [184/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.763 [185/268] Linking target lib/librte_kvargs.so.24.1 00:03:04.763 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:04.763 [187/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:04.763 [188/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.763 [189/268] Linking static target lib/librte_hash.a 00:03:04.763 [190/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:04.763 [191/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.763 [192/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:04.763 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:04.763 [194/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.763 [195/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:05.022 [196/268] Linking static target lib/librte_cryptodev.a 00:03:05.022 [197/268] 
Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.022 [198/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:05.022 [199/268] Linking target lib/librte_telemetry.so.24.1 00:03:05.022 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.022 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.022 [202/268] Linking static target drivers/librte_bus_vdev.a 00:03:05.022 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.022 [204/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:05.022 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.022 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.022 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.022 [208/268] Linking static target drivers/librte_bus_pci.a 00:03:05.022 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:05.022 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.022 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.022 [212/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:05.022 [213/268] Linking static target drivers/librte_mempool_ring.a 00:03:05.281 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.281 [215/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.281 [216/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.281 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:05.281 [218/268] Linking static target lib/librte_ethdev.a 00:03:05.281 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.281 [220/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.539 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.539 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.539 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:05.797 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.797 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.797 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.797 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.362 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.363 [229/268] Linking static target lib/librte_vhost.a 00:03:07.297 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.674 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.272 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:16.647 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.905 [234/268] Linking target lib/librte_eal.so.24.1 00:03:16.905 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:16.905 [236/268] Linking target lib/librte_meter.so.24.1 00:03:16.905 [237/268] Linking target lib/librte_ring.so.24.1 00:03:16.905 [238/268] Linking target lib/librte_timer.so.24.1 00:03:16.905 [239/268] Linking target lib/librte_pci.so.24.1 00:03:16.905 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:16.905 [241/268] Linking target lib/librte_dmadev.so.24.1 00:03:17.164 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:17.164 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:17.164 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:17.164 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:17.164 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:17.164 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:17.164 [248/268] Linking target lib/librte_mempool.so.24.1 00:03:17.164 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:17.423 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:17.423 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:17.423 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:17.423 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:17.423 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:17.423 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:17.682 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:17.682 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:17.682 [258/268] Linking target lib/librte_net.so.24.1 00:03:17.682 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:17.682 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:17.682 [261/268] Linking target lib/librte_security.so.24.1 00:03:17.682 [262/268] Linking target lib/librte_hash.so.24.1 00:03:17.682 [263/268] Linking target lib/librte_cmdline.so.24.1 00:03:17.682 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:17.940 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:17.940 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:17.941 [267/268] Linking target lib/librte_power.so.24.1 00:03:17.941 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:17.941 INFO: autodetecting backend as ninja 00:03:17.941 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:03:19.317 CC lib/ut_mock/mock.o 00:03:19.317 CC lib/log/log.o 00:03:19.317 CC lib/log/log_deprecated.o 00:03:19.317 CC lib/log/log_flags.o 00:03:19.317 CC lib/ut/ut.o 00:03:19.317 LIB libspdk_ut_mock.a 00:03:19.317 LIB libspdk_log.a 00:03:19.317 LIB libspdk_ut.a 00:03:19.317 SO libspdk_ut_mock.so.6.0 00:03:19.317 SO libspdk_log.so.7.0 00:03:19.317 SO libspdk_ut.so.2.0 00:03:19.317 SYMLINK libspdk_ut_mock.so 00:03:19.317 SYMLINK libspdk_ut.so 00:03:19.317 SYMLINK libspdk_log.so 00:03:19.576 
CXX lib/trace_parser/trace.o 00:03:19.836 CC lib/dma/dma.o 00:03:19.836 CC lib/util/base64.o 00:03:19.836 CC lib/util/crc16.o 00:03:19.836 CC lib/util/bit_array.o 00:03:19.836 CC lib/util/cpuset.o 00:03:19.836 CC lib/util/crc32.o 00:03:19.836 CC lib/util/crc32_ieee.o 00:03:19.836 CC lib/util/crc64.o 00:03:19.836 CC lib/util/crc32c.o 00:03:19.836 CC lib/ioat/ioat.o 00:03:19.836 CC lib/util/dif.o 00:03:19.836 CC lib/util/fd.o 00:03:19.836 CC lib/util/fd_group.o 00:03:19.836 CC lib/util/file.o 00:03:19.836 CC lib/util/hexlify.o 00:03:19.836 CC lib/util/iov.o 00:03:19.836 CC lib/util/math.o 00:03:19.836 CC lib/util/net.o 00:03:19.836 CC lib/util/pipe.o 00:03:19.836 CC lib/util/strerror_tls.o 00:03:19.836 CC lib/util/string.o 00:03:19.836 CC lib/util/uuid.o 00:03:19.836 CC lib/util/xor.o 00:03:19.836 CC lib/util/zipf.o 00:03:19.836 CC lib/vfio_user/host/vfio_user_pci.o 00:03:19.836 CC lib/vfio_user/host/vfio_user.o 00:03:19.836 LIB libspdk_dma.a 00:03:19.836 SO libspdk_dma.so.4.0 00:03:20.096 SYMLINK libspdk_dma.so 00:03:20.096 LIB libspdk_ioat.a 00:03:20.096 SO libspdk_ioat.so.7.0 00:03:20.096 SYMLINK libspdk_ioat.so 00:03:20.096 LIB libspdk_vfio_user.a 00:03:20.096 SO libspdk_vfio_user.so.5.0 00:03:20.096 LIB libspdk_util.a 00:03:20.096 SYMLINK libspdk_vfio_user.so 00:03:20.355 SO libspdk_util.so.10.0 00:03:20.355 SYMLINK libspdk_util.so 00:03:20.355 LIB libspdk_trace_parser.a 00:03:20.355 SO libspdk_trace_parser.so.5.0 00:03:20.614 SYMLINK libspdk_trace_parser.so 00:03:20.614 CC lib/json/json_parse.o 00:03:20.614 CC lib/json/json_write.o 00:03:20.614 CC lib/json/json_util.o 00:03:20.614 CC lib/rdma_provider/common.o 00:03:20.614 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:20.614 CC lib/env_dpdk/env.o 00:03:20.614 CC lib/idxd/idxd.o 00:03:20.614 CC lib/env_dpdk/memory.o 00:03:20.614 CC lib/env_dpdk/pci.o 00:03:20.614 CC lib/idxd/idxd_user.o 00:03:20.614 CC lib/env_dpdk/init.o 00:03:20.614 CC lib/idxd/idxd_kernel.o 00:03:20.614 CC lib/env_dpdk/threads.o 00:03:20.614 CC lib/env_dpdk/pci_ioat.o 00:03:20.614 CC lib/env_dpdk/pci_virtio.o 00:03:20.614 CC lib/env_dpdk/pci_idxd.o 00:03:20.614 CC lib/env_dpdk/pci_vmd.o 00:03:20.614 CC lib/rdma_utils/rdma_utils.o 00:03:20.614 CC lib/env_dpdk/pci_event.o 00:03:20.614 CC lib/conf/conf.o 00:03:20.614 CC lib/env_dpdk/sigbus_handler.o 00:03:20.614 CC lib/env_dpdk/pci_dpdk.o 00:03:20.872 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:20.872 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:20.872 CC lib/vmd/vmd.o 00:03:20.872 CC lib/vmd/led.o 00:03:20.872 LIB libspdk_rdma_provider.a 00:03:20.872 LIB libspdk_conf.a 00:03:20.872 SO libspdk_rdma_provider.so.6.0 00:03:20.872 LIB libspdk_json.a 00:03:20.872 LIB libspdk_rdma_utils.a 00:03:21.130 SO libspdk_conf.so.6.0 00:03:21.130 SO libspdk_json.so.6.0 00:03:21.130 SYMLINK libspdk_rdma_provider.so 00:03:21.130 SO libspdk_rdma_utils.so.1.0 00:03:21.130 SYMLINK libspdk_conf.so 00:03:21.130 SYMLINK libspdk_json.so 00:03:21.130 SYMLINK libspdk_rdma_utils.so 00:03:21.130 LIB libspdk_idxd.a 00:03:21.130 SO libspdk_idxd.so.12.0 00:03:21.130 LIB libspdk_vmd.a 00:03:21.388 SO libspdk_vmd.so.6.0 00:03:21.388 SYMLINK libspdk_idxd.so 00:03:21.388 SYMLINK libspdk_vmd.so 00:03:21.388 CC lib/jsonrpc/jsonrpc_server.o 00:03:21.388 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:21.388 CC lib/jsonrpc/jsonrpc_client.o 00:03:21.388 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:21.646 LIB libspdk_jsonrpc.a 00:03:21.646 SO libspdk_jsonrpc.so.6.0 00:03:21.646 SYMLINK libspdk_jsonrpc.so 00:03:21.646 LIB libspdk_env_dpdk.a 00:03:21.906 SO libspdk_env_dpdk.so.15.0 
00:03:21.906 SYMLINK libspdk_env_dpdk.so 00:03:22.165 CC lib/rpc/rpc.o 00:03:22.165 LIB libspdk_rpc.a 00:03:22.424 SO libspdk_rpc.so.6.0 00:03:22.424 SYMLINK libspdk_rpc.so 00:03:22.683 CC lib/keyring/keyring.o 00:03:22.683 CC lib/keyring/keyring_rpc.o 00:03:22.683 CC lib/trace/trace.o 00:03:22.683 CC lib/trace/trace_flags.o 00:03:22.683 CC lib/trace/trace_rpc.o 00:03:22.683 CC lib/notify/notify.o 00:03:22.683 CC lib/notify/notify_rpc.o 00:03:22.942 LIB libspdk_keyring.a 00:03:22.942 LIB libspdk_notify.a 00:03:22.942 SO libspdk_keyring.so.1.0 00:03:22.942 LIB libspdk_trace.a 00:03:22.942 SO libspdk_notify.so.6.0 00:03:22.942 SYMLINK libspdk_keyring.so 00:03:22.942 SO libspdk_trace.so.10.0 00:03:22.942 SYMLINK libspdk_notify.so 00:03:22.942 SYMLINK libspdk_trace.so 00:03:23.509 CC lib/thread/iobuf.o 00:03:23.509 CC lib/thread/thread.o 00:03:23.509 CC lib/sock/sock.o 00:03:23.509 CC lib/sock/sock_rpc.o 00:03:23.767 LIB libspdk_sock.a 00:03:23.767 SO libspdk_sock.so.10.0 00:03:23.767 SYMLINK libspdk_sock.so 00:03:24.333 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:24.334 CC lib/nvme/nvme_ctrlr.o 00:03:24.334 CC lib/nvme/nvme_fabric.o 00:03:24.334 CC lib/nvme/nvme_ns_cmd.o 00:03:24.334 CC lib/nvme/nvme_ns.o 00:03:24.334 CC lib/nvme/nvme_pcie_common.o 00:03:24.334 CC lib/nvme/nvme_pcie.o 00:03:24.334 CC lib/nvme/nvme_qpair.o 00:03:24.334 CC lib/nvme/nvme.o 00:03:24.334 CC lib/nvme/nvme_quirks.o 00:03:24.334 CC lib/nvme/nvme_transport.o 00:03:24.334 CC lib/nvme/nvme_discovery.o 00:03:24.334 CC lib/nvme/nvme_tcp.o 00:03:24.334 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:24.334 CC lib/nvme/nvme_opal.o 00:03:24.334 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:24.334 CC lib/nvme/nvme_io_msg.o 00:03:24.334 CC lib/nvme/nvme_poll_group.o 00:03:24.334 CC lib/nvme/nvme_zns.o 00:03:24.334 CC lib/nvme/nvme_stubs.o 00:03:24.334 CC lib/nvme/nvme_cuse.o 00:03:24.334 CC lib/nvme/nvme_auth.o 00:03:24.334 CC lib/nvme/nvme_rdma.o 00:03:24.334 LIB libspdk_thread.a 00:03:24.592 SO libspdk_thread.so.10.1 00:03:24.592 SYMLINK libspdk_thread.so 00:03:24.851 CC lib/init/json_config.o 00:03:24.851 CC lib/init/subsystem.o 00:03:24.851 CC lib/init/subsystem_rpc.o 00:03:24.851 CC lib/init/rpc.o 00:03:24.851 CC lib/virtio/virtio_vfio_user.o 00:03:24.851 CC lib/virtio/virtio.o 00:03:24.851 CC lib/virtio/virtio_vhost_user.o 00:03:24.851 CC lib/virtio/virtio_pci.o 00:03:24.851 CC lib/accel/accel.o 00:03:24.851 CC lib/accel/accel_rpc.o 00:03:24.851 CC lib/accel/accel_sw.o 00:03:24.851 CC lib/blob/request.o 00:03:24.851 CC lib/blob/blobstore.o 00:03:24.851 CC lib/blob/zeroes.o 00:03:24.851 CC lib/blob/blob_bs_dev.o 00:03:25.110 LIB libspdk_init.a 00:03:25.110 SO libspdk_init.so.5.0 00:03:25.110 LIB libspdk_virtio.a 00:03:25.110 SO libspdk_virtio.so.7.0 00:03:25.110 SYMLINK libspdk_init.so 00:03:25.368 SYMLINK libspdk_virtio.so 00:03:25.626 CC lib/event/app.o 00:03:25.626 CC lib/event/reactor.o 00:03:25.626 CC lib/event/log_rpc.o 00:03:25.626 CC lib/event/app_rpc.o 00:03:25.626 CC lib/event/scheduler_static.o 00:03:25.626 LIB libspdk_accel.a 00:03:25.626 SO libspdk_accel.so.16.0 00:03:25.626 SYMLINK libspdk_accel.so 00:03:25.626 LIB libspdk_nvme.a 00:03:25.884 SO libspdk_nvme.so.13.1 00:03:25.884 LIB libspdk_event.a 00:03:25.884 SO libspdk_event.so.14.0 00:03:25.884 SYMLINK libspdk_event.so 00:03:26.143 SYMLINK libspdk_nvme.so 00:03:26.143 CC lib/bdev/bdev.o 00:03:26.143 CC lib/bdev/bdev_rpc.o 00:03:26.143 CC lib/bdev/bdev_zone.o 00:03:26.143 CC lib/bdev/part.o 00:03:26.143 CC lib/bdev/scsi_nvme.o 00:03:27.079 LIB libspdk_blob.a 00:03:27.079 SO 
libspdk_blob.so.11.0 00:03:27.079 SYMLINK libspdk_blob.so 00:03:27.338 CC lib/lvol/lvol.o 00:03:27.338 CC lib/blobfs/blobfs.o 00:03:27.338 CC lib/blobfs/tree.o 00:03:27.904 LIB libspdk_bdev.a 00:03:27.904 SO libspdk_bdev.so.16.0 00:03:27.904 LIB libspdk_blobfs.a 00:03:27.904 SYMLINK libspdk_bdev.so 00:03:28.162 LIB libspdk_lvol.a 00:03:28.162 SO libspdk_blobfs.so.10.0 00:03:28.162 SO libspdk_lvol.so.10.0 00:03:28.162 SYMLINK libspdk_blobfs.so 00:03:28.162 SYMLINK libspdk_lvol.so 00:03:28.420 CC lib/nvmf/ctrlr_discovery.o 00:03:28.420 CC lib/nvmf/ctrlr.o 00:03:28.420 CC lib/nvmf/ctrlr_bdev.o 00:03:28.420 CC lib/nvmf/subsystem.o 00:03:28.420 CC lib/nvmf/transport.o 00:03:28.420 CC lib/nvmf/nvmf.o 00:03:28.420 CC lib/nvmf/nvmf_rpc.o 00:03:28.420 CC lib/nvmf/tcp.o 00:03:28.420 CC lib/nvmf/stubs.o 00:03:28.420 CC lib/nvmf/mdns_server.o 00:03:28.420 CC lib/nvmf/rdma.o 00:03:28.420 CC lib/nvmf/auth.o 00:03:28.420 CC lib/nbd/nbd_rpc.o 00:03:28.420 CC lib/nbd/nbd.o 00:03:28.420 CC lib/ublk/ublk.o 00:03:28.420 CC lib/ublk/ublk_rpc.o 00:03:28.420 CC lib/scsi/lun.o 00:03:28.420 CC lib/scsi/dev.o 00:03:28.420 CC lib/scsi/port.o 00:03:28.420 CC lib/scsi/scsi.o 00:03:28.420 CC lib/scsi/scsi_bdev.o 00:03:28.420 CC lib/scsi/scsi_pr.o 00:03:28.420 CC lib/ftl/ftl_core.o 00:03:28.420 CC lib/scsi/scsi_rpc.o 00:03:28.420 CC lib/ftl/ftl_init.o 00:03:28.420 CC lib/scsi/task.o 00:03:28.420 CC lib/ftl/ftl_layout.o 00:03:28.420 CC lib/ftl/ftl_debug.o 00:03:28.420 CC lib/ftl/ftl_io.o 00:03:28.420 CC lib/ftl/ftl_sb.o 00:03:28.420 CC lib/ftl/ftl_l2p.o 00:03:28.420 CC lib/ftl/ftl_l2p_flat.o 00:03:28.420 CC lib/ftl/ftl_nv_cache.o 00:03:28.420 CC lib/ftl/ftl_band.o 00:03:28.420 CC lib/ftl/ftl_band_ops.o 00:03:28.420 CC lib/ftl/ftl_writer.o 00:03:28.420 CC lib/ftl/ftl_rq.o 00:03:28.420 CC lib/ftl/ftl_reloc.o 00:03:28.420 CC lib/ftl/ftl_l2p_cache.o 00:03:28.420 CC lib/ftl/ftl_p2l.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:28.420 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:28.420 CC lib/ftl/utils/ftl_md.o 00:03:28.420 CC lib/ftl/utils/ftl_conf.o 00:03:28.420 CC lib/ftl/utils/ftl_mempool.o 00:03:28.420 CC lib/ftl/utils/ftl_bitmap.o 00:03:28.420 CC lib/ftl/utils/ftl_property.o 00:03:28.420 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:28.420 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:28.420 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:28.420 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:28.420 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:28.420 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:28.420 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:28.420 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:28.420 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:28.420 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:28.420 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:28.420 CC lib/ftl/base/ftl_base_dev.o 00:03:28.420 CC lib/ftl/base/ftl_base_bdev.o 00:03:28.420 CC lib/ftl/ftl_trace.o 00:03:28.984 LIB libspdk_nbd.a 00:03:28.984 SO libspdk_nbd.so.7.0 00:03:28.984 SYMLINK libspdk_nbd.so 00:03:28.984 LIB libspdk_scsi.a 00:03:28.984 SO libspdk_scsi.so.9.0 00:03:29.269 SYMLINK 
libspdk_scsi.so 00:03:29.269 LIB libspdk_ublk.a 00:03:29.269 SO libspdk_ublk.so.3.0 00:03:29.269 SYMLINK libspdk_ublk.so 00:03:29.269 LIB libspdk_ftl.a 00:03:29.528 CC lib/iscsi/conn.o 00:03:29.528 CC lib/iscsi/init_grp.o 00:03:29.528 CC lib/iscsi/iscsi.o 00:03:29.528 CC lib/iscsi/md5.o 00:03:29.528 CC lib/iscsi/param.o 00:03:29.528 CC lib/iscsi/tgt_node.o 00:03:29.528 CC lib/iscsi/portal_grp.o 00:03:29.528 CC lib/iscsi/iscsi_subsystem.o 00:03:29.528 CC lib/iscsi/iscsi_rpc.o 00:03:29.528 CC lib/iscsi/task.o 00:03:29.528 CC lib/vhost/vhost.o 00:03:29.528 CC lib/vhost/vhost_rpc.o 00:03:29.528 CC lib/vhost/rte_vhost_user.o 00:03:29.528 CC lib/vhost/vhost_scsi.o 00:03:29.528 CC lib/vhost/vhost_blk.o 00:03:29.528 SO libspdk_ftl.so.9.0 00:03:29.786 SYMLINK libspdk_ftl.so 00:03:30.045 LIB libspdk_nvmf.a 00:03:30.045 SO libspdk_nvmf.so.19.0 00:03:30.304 SYMLINK libspdk_nvmf.so 00:03:30.304 LIB libspdk_vhost.a 00:03:30.304 SO libspdk_vhost.so.8.0 00:03:30.304 SYMLINK libspdk_vhost.so 00:03:30.563 LIB libspdk_iscsi.a 00:03:30.563 SO libspdk_iscsi.so.8.0 00:03:30.563 SYMLINK libspdk_iscsi.so 00:03:31.130 CC module/env_dpdk/env_dpdk_rpc.o 00:03:31.388 LIB libspdk_env_dpdk_rpc.a 00:03:31.388 CC module/accel/error/accel_error.o 00:03:31.388 CC module/accel/error/accel_error_rpc.o 00:03:31.388 CC module/accel/iaa/accel_iaa.o 00:03:31.388 CC module/accel/iaa/accel_iaa_rpc.o 00:03:31.388 CC module/accel/dsa/accel_dsa_rpc.o 00:03:31.388 CC module/accel/dsa/accel_dsa.o 00:03:31.388 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:31.388 CC module/keyring/file/keyring.o 00:03:31.388 CC module/keyring/file/keyring_rpc.o 00:03:31.388 CC module/accel/ioat/accel_ioat.o 00:03:31.388 CC module/accel/ioat/accel_ioat_rpc.o 00:03:31.388 CC module/sock/posix/posix.o 00:03:31.388 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:31.388 CC module/keyring/linux/keyring.o 00:03:31.388 CC module/keyring/linux/keyring_rpc.o 00:03:31.388 SO libspdk_env_dpdk_rpc.so.6.0 00:03:31.388 CC module/scheduler/gscheduler/gscheduler.o 00:03:31.388 CC module/blob/bdev/blob_bdev.o 00:03:31.388 SYMLINK libspdk_env_dpdk_rpc.so 00:03:31.388 LIB libspdk_scheduler_dpdk_governor.a 00:03:31.388 LIB libspdk_accel_error.a 00:03:31.646 LIB libspdk_keyring_linux.a 00:03:31.646 LIB libspdk_keyring_file.a 00:03:31.646 LIB libspdk_accel_iaa.a 00:03:31.646 LIB libspdk_scheduler_gscheduler.a 00:03:31.646 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:31.646 SO libspdk_keyring_file.so.1.0 00:03:31.646 SO libspdk_accel_error.so.2.0 00:03:31.646 LIB libspdk_accel_ioat.a 00:03:31.646 SO libspdk_scheduler_gscheduler.so.4.0 00:03:31.646 SO libspdk_keyring_linux.so.1.0 00:03:31.646 LIB libspdk_scheduler_dynamic.a 00:03:31.646 LIB libspdk_accel_dsa.a 00:03:31.646 SO libspdk_accel_iaa.so.3.0 00:03:31.646 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:31.646 SO libspdk_accel_dsa.so.5.0 00:03:31.646 SO libspdk_scheduler_dynamic.so.4.0 00:03:31.646 SO libspdk_accel_ioat.so.6.0 00:03:31.646 SYMLINK libspdk_keyring_file.so 00:03:31.646 LIB libspdk_blob_bdev.a 00:03:31.646 SYMLINK libspdk_accel_error.so 00:03:31.646 SYMLINK libspdk_scheduler_gscheduler.so 00:03:31.646 SYMLINK libspdk_accel_iaa.so 00:03:31.646 SYMLINK libspdk_keyring_linux.so 00:03:31.646 SO libspdk_blob_bdev.so.11.0 00:03:31.646 SYMLINK libspdk_accel_dsa.so 00:03:31.646 SYMLINK libspdk_scheduler_dynamic.so 00:03:31.646 SYMLINK libspdk_accel_ioat.so 00:03:31.646 SYMLINK libspdk_blob_bdev.so 00:03:31.906 LIB libspdk_sock_posix.a 00:03:31.906 SO libspdk_sock_posix.so.6.0 00:03:32.165 SYMLINK 
libspdk_sock_posix.so 00:03:32.165 CC module/bdev/nvme/bdev_nvme.o 00:03:32.165 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:32.165 CC module/bdev/nvme/nvme_rpc.o 00:03:32.165 CC module/bdev/nvme/vbdev_opal.o 00:03:32.165 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.165 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:32.165 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:32.165 CC module/bdev/malloc/bdev_malloc.o 00:03:32.165 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:32.165 CC module/bdev/lvol/vbdev_lvol.o 00:03:32.165 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:32.165 CC module/bdev/error/vbdev_error_rpc.o 00:03:32.165 CC module/bdev/error/vbdev_error.o 00:03:32.165 CC module/blobfs/bdev/blobfs_bdev.o 00:03:32.165 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:32.165 CC module/bdev/delay/vbdev_delay.o 00:03:32.165 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:32.165 CC module/bdev/gpt/vbdev_gpt.o 00:03:32.165 CC module/bdev/gpt/gpt.o 00:03:32.165 CC module/bdev/split/vbdev_split_rpc.o 00:03:32.165 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:32.165 CC module/bdev/split/vbdev_split.o 00:03:32.165 CC module/bdev/ftl/bdev_ftl.o 00:03:32.165 CC module/bdev/iscsi/bdev_iscsi.o 00:03:32.165 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:32.165 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:32.165 CC module/bdev/raid/bdev_raid.o 00:03:32.165 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:32.165 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:32.165 CC module/bdev/raid/bdev_raid_rpc.o 00:03:32.165 CC module/bdev/raid/bdev_raid_sb.o 00:03:32.165 CC module/bdev/null/bdev_null.o 00:03:32.165 CC module/bdev/null/bdev_null_rpc.o 00:03:32.165 CC module/bdev/raid/raid0.o 00:03:32.165 CC module/bdev/raid/raid1.o 00:03:32.165 CC module/bdev/raid/concat.o 00:03:32.165 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:32.165 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:32.424 CC module/bdev/passthru/vbdev_passthru.o 00:03:32.424 CC module/bdev/aio/bdev_aio.o 00:03:32.424 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:32.424 CC module/bdev/aio/bdev_aio_rpc.o 00:03:32.424 LIB libspdk_blobfs_bdev.a 00:03:32.424 SO libspdk_blobfs_bdev.so.6.0 00:03:32.424 LIB libspdk_bdev_split.a 00:03:32.682 LIB libspdk_bdev_error.a 00:03:32.682 LIB libspdk_bdev_gpt.a 00:03:32.683 LIB libspdk_bdev_null.a 00:03:32.683 SO libspdk_bdev_split.so.6.0 00:03:32.683 SO libspdk_bdev_error.so.6.0 00:03:32.683 SO libspdk_bdev_gpt.so.6.0 00:03:32.683 SYMLINK libspdk_blobfs_bdev.so 00:03:32.683 SO libspdk_bdev_null.so.6.0 00:03:32.683 LIB libspdk_bdev_passthru.a 00:03:32.683 LIB libspdk_bdev_ftl.a 00:03:32.683 LIB libspdk_bdev_aio.a 00:03:32.683 LIB libspdk_bdev_zone_block.a 00:03:32.683 SYMLINK libspdk_bdev_split.so 00:03:32.683 LIB libspdk_bdev_delay.a 00:03:32.683 SO libspdk_bdev_passthru.so.6.0 00:03:32.683 LIB libspdk_bdev_malloc.a 00:03:32.683 LIB libspdk_bdev_iscsi.a 00:03:32.683 SO libspdk_bdev_ftl.so.6.0 00:03:32.683 SYMLINK libspdk_bdev_error.so 00:03:32.683 SYMLINK libspdk_bdev_null.so 00:03:32.683 SO libspdk_bdev_zone_block.so.6.0 00:03:32.683 SO libspdk_bdev_aio.so.6.0 00:03:32.683 SYMLINK libspdk_bdev_gpt.so 00:03:32.683 SO libspdk_bdev_delay.so.6.0 00:03:32.683 SO libspdk_bdev_malloc.so.6.0 00:03:32.683 SO libspdk_bdev_iscsi.so.6.0 00:03:32.683 SYMLINK libspdk_bdev_passthru.so 00:03:32.683 SYMLINK libspdk_bdev_ftl.so 00:03:32.683 SYMLINK libspdk_bdev_zone_block.so 00:03:32.683 LIB libspdk_bdev_lvol.a 00:03:32.683 SYMLINK libspdk_bdev_aio.so 00:03:32.683 SYMLINK libspdk_bdev_iscsi.so 00:03:32.683 SYMLINK libspdk_bdev_malloc.so 
00:03:32.683 SYMLINK libspdk_bdev_delay.so 00:03:32.683 SO libspdk_bdev_lvol.so.6.0 00:03:32.683 LIB libspdk_bdev_virtio.a 00:03:32.942 SO libspdk_bdev_virtio.so.6.0 00:03:32.942 SYMLINK libspdk_bdev_lvol.so 00:03:32.942 SYMLINK libspdk_bdev_virtio.so 00:03:33.201 LIB libspdk_bdev_raid.a 00:03:33.201 SO libspdk_bdev_raid.so.6.0 00:03:33.201 SYMLINK libspdk_bdev_raid.so 00:03:33.767 LIB libspdk_bdev_nvme.a 00:03:33.767 SO libspdk_bdev_nvme.so.7.0 00:03:34.025 SYMLINK libspdk_bdev_nvme.so 00:03:34.591 CC module/event/subsystems/scheduler/scheduler.o 00:03:34.591 CC module/event/subsystems/sock/sock.o 00:03:34.591 CC module/event/subsystems/keyring/keyring.o 00:03:34.591 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:34.591 CC module/event/subsystems/iobuf/iobuf.o 00:03:34.591 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:34.591 CC module/event/subsystems/vmd/vmd.o 00:03:34.591 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:34.850 LIB libspdk_event_keyring.a 00:03:34.850 LIB libspdk_event_scheduler.a 00:03:34.850 LIB libspdk_event_sock.a 00:03:34.850 LIB libspdk_event_vhost_blk.a 00:03:34.850 LIB libspdk_event_iobuf.a 00:03:34.850 LIB libspdk_event_vmd.a 00:03:34.850 SO libspdk_event_vhost_blk.so.3.0 00:03:34.850 SO libspdk_event_sock.so.5.0 00:03:34.850 SO libspdk_event_keyring.so.1.0 00:03:34.850 SO libspdk_event_scheduler.so.4.0 00:03:34.850 SO libspdk_event_vmd.so.6.0 00:03:34.850 SO libspdk_event_iobuf.so.3.0 00:03:34.850 SYMLINK libspdk_event_sock.so 00:03:34.850 SYMLINK libspdk_event_scheduler.so 00:03:34.850 SYMLINK libspdk_event_keyring.so 00:03:34.850 SYMLINK libspdk_event_vhost_blk.so 00:03:34.850 SYMLINK libspdk_event_vmd.so 00:03:34.850 SYMLINK libspdk_event_iobuf.so 00:03:35.416 CC module/event/subsystems/accel/accel.o 00:03:35.416 LIB libspdk_event_accel.a 00:03:35.416 SO libspdk_event_accel.so.6.0 00:03:35.674 SYMLINK libspdk_event_accel.so 00:03:35.933 CC module/event/subsystems/bdev/bdev.o 00:03:36.191 LIB libspdk_event_bdev.a 00:03:36.191 SO libspdk_event_bdev.so.6.0 00:03:36.191 SYMLINK libspdk_event_bdev.so 00:03:36.448 CC module/event/subsystems/scsi/scsi.o 00:03:36.448 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:36.448 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:36.707 CC module/event/subsystems/ublk/ublk.o 00:03:36.707 CC module/event/subsystems/nbd/nbd.o 00:03:36.707 LIB libspdk_event_scsi.a 00:03:36.707 LIB libspdk_event_nbd.a 00:03:36.707 LIB libspdk_event_ublk.a 00:03:36.707 SO libspdk_event_scsi.so.6.0 00:03:36.707 SO libspdk_event_nbd.so.6.0 00:03:36.707 LIB libspdk_event_nvmf.a 00:03:36.707 SO libspdk_event_ublk.so.3.0 00:03:36.707 SO libspdk_event_nvmf.so.6.0 00:03:36.707 SYMLINK libspdk_event_scsi.so 00:03:36.707 SYMLINK libspdk_event_nbd.so 00:03:36.965 SYMLINK libspdk_event_ublk.so 00:03:36.965 SYMLINK libspdk_event_nvmf.so 00:03:37.223 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:37.223 CC module/event/subsystems/iscsi/iscsi.o 00:03:37.223 LIB libspdk_event_vhost_scsi.a 00:03:37.223 LIB libspdk_event_iscsi.a 00:03:37.223 SO libspdk_event_vhost_scsi.so.3.0 00:03:37.480 SO libspdk_event_iscsi.so.6.0 00:03:37.480 SYMLINK libspdk_event_vhost_scsi.so 00:03:37.480 SYMLINK libspdk_event_iscsi.so 00:03:37.738 SO libspdk.so.6.0 00:03:37.738 SYMLINK libspdk.so 00:03:37.998 TEST_HEADER include/spdk/accel.h 00:03:37.998 TEST_HEADER include/spdk/assert.h 00:03:37.998 TEST_HEADER include/spdk/accel_module.h 00:03:37.998 CC app/spdk_top/spdk_top.o 00:03:37.998 TEST_HEADER include/spdk/barrier.h 00:03:37.998 TEST_HEADER 
include/spdk/bdev.h 00:03:37.998 CC test/rpc_client/rpc_client_test.o 00:03:37.998 TEST_HEADER include/spdk/base64.h 00:03:37.998 CC app/trace_record/trace_record.o 00:03:37.998 CC app/spdk_lspci/spdk_lspci.o 00:03:37.998 TEST_HEADER include/spdk/bdev_module.h 00:03:37.998 TEST_HEADER include/spdk/bdev_zone.h 00:03:37.998 TEST_HEADER include/spdk/blob_bdev.h 00:03:37.998 TEST_HEADER include/spdk/bit_pool.h 00:03:37.998 CC app/spdk_nvme_perf/perf.o 00:03:37.998 TEST_HEADER include/spdk/bit_array.h 00:03:37.998 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:37.998 TEST_HEADER include/spdk/blobfs.h 00:03:37.998 TEST_HEADER include/spdk/conf.h 00:03:37.998 TEST_HEADER include/spdk/blob.h 00:03:37.998 TEST_HEADER include/spdk/config.h 00:03:37.998 CXX app/trace/trace.o 00:03:37.998 CC app/spdk_nvme_identify/identify.o 00:03:37.998 TEST_HEADER include/spdk/crc16.h 00:03:37.998 TEST_HEADER include/spdk/cpuset.h 00:03:37.998 CC app/spdk_nvme_discover/discovery_aer.o 00:03:37.998 TEST_HEADER include/spdk/dif.h 00:03:37.998 TEST_HEADER include/spdk/crc64.h 00:03:37.998 TEST_HEADER include/spdk/crc32.h 00:03:37.998 TEST_HEADER include/spdk/endian.h 00:03:37.998 TEST_HEADER include/spdk/dma.h 00:03:37.998 TEST_HEADER include/spdk/event.h 00:03:37.998 TEST_HEADER include/spdk/env_dpdk.h 00:03:37.998 TEST_HEADER include/spdk/fd.h 00:03:37.998 TEST_HEADER include/spdk/fd_group.h 00:03:37.998 TEST_HEADER include/spdk/env.h 00:03:37.998 TEST_HEADER include/spdk/file.h 00:03:37.998 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:37.998 TEST_HEADER include/spdk/ftl.h 00:03:37.998 TEST_HEADER include/spdk/gpt_spec.h 00:03:37.998 TEST_HEADER include/spdk/histogram_data.h 00:03:37.998 TEST_HEADER include/spdk/hexlify.h 00:03:37.998 TEST_HEADER include/spdk/idxd_spec.h 00:03:37.998 TEST_HEADER include/spdk/init.h 00:03:37.998 TEST_HEADER include/spdk/idxd.h 00:03:37.998 TEST_HEADER include/spdk/ioat.h 00:03:37.998 TEST_HEADER include/spdk/ioat_spec.h 00:03:37.998 TEST_HEADER include/spdk/jsonrpc.h 00:03:37.998 TEST_HEADER include/spdk/iscsi_spec.h 00:03:37.998 TEST_HEADER include/spdk/json.h 00:03:37.998 TEST_HEADER include/spdk/keyring.h 00:03:37.998 TEST_HEADER include/spdk/likely.h 00:03:37.998 TEST_HEADER include/spdk/keyring_module.h 00:03:37.998 TEST_HEADER include/spdk/log.h 00:03:37.998 TEST_HEADER include/spdk/lvol.h 00:03:37.998 TEST_HEADER include/spdk/mmio.h 00:03:37.998 TEST_HEADER include/spdk/memory.h 00:03:37.998 TEST_HEADER include/spdk/nbd.h 00:03:37.998 TEST_HEADER include/spdk/net.h 00:03:37.998 CC app/iscsi_tgt/iscsi_tgt.o 00:03:37.998 TEST_HEADER include/spdk/nvme_intel.h 00:03:37.998 TEST_HEADER include/spdk/notify.h 00:03:37.998 TEST_HEADER include/spdk/nvme.h 00:03:37.998 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:37.998 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:37.998 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:37.998 TEST_HEADER include/spdk/nvme_zns.h 00:03:37.998 TEST_HEADER include/spdk/nvme_spec.h 00:03:37.998 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:37.998 TEST_HEADER include/spdk/nvmf.h 00:03:37.998 CC app/nvmf_tgt/nvmf_main.o 00:03:37.998 CC app/spdk_dd/spdk_dd.o 00:03:37.998 TEST_HEADER include/spdk/nvmf_spec.h 00:03:37.998 TEST_HEADER include/spdk/nvmf_transport.h 00:03:37.998 TEST_HEADER include/spdk/opal.h 00:03:37.998 TEST_HEADER include/spdk/opal_spec.h 00:03:37.998 TEST_HEADER include/spdk/pci_ids.h 00:03:37.998 TEST_HEADER include/spdk/pipe.h 00:03:37.999 TEST_HEADER include/spdk/queue.h 00:03:37.999 TEST_HEADER include/spdk/reduce.h 00:03:37.999 TEST_HEADER 
include/spdk/scsi.h 00:03:37.999 TEST_HEADER include/spdk/rpc.h 00:03:37.999 TEST_HEADER include/spdk/scheduler.h 00:03:37.999 TEST_HEADER include/spdk/scsi_spec.h 00:03:37.999 TEST_HEADER include/spdk/sock.h 00:03:37.999 TEST_HEADER include/spdk/stdinc.h 00:03:37.999 TEST_HEADER include/spdk/thread.h 00:03:37.999 TEST_HEADER include/spdk/string.h 00:03:37.999 TEST_HEADER include/spdk/trace.h 00:03:37.999 TEST_HEADER include/spdk/tree.h 00:03:37.999 TEST_HEADER include/spdk/ublk.h 00:03:37.999 TEST_HEADER include/spdk/trace_parser.h 00:03:37.999 TEST_HEADER include/spdk/uuid.h 00:03:37.999 TEST_HEADER include/spdk/util.h 00:03:37.999 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:37.999 TEST_HEADER include/spdk/version.h 00:03:37.999 TEST_HEADER include/spdk/vhost.h 00:03:37.999 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:37.999 TEST_HEADER include/spdk/vmd.h 00:03:37.999 TEST_HEADER include/spdk/xor.h 00:03:37.999 CC app/spdk_tgt/spdk_tgt.o 00:03:37.999 TEST_HEADER include/spdk/zipf.h 00:03:37.999 CXX test/cpp_headers/accel.o 00:03:37.999 CXX test/cpp_headers/assert.o 00:03:37.999 CXX test/cpp_headers/barrier.o 00:03:37.999 CXX test/cpp_headers/accel_module.o 00:03:37.999 CXX test/cpp_headers/base64.o 00:03:37.999 CXX test/cpp_headers/bdev.o 00:03:37.999 CXX test/cpp_headers/bdev_module.o 00:03:37.999 CXX test/cpp_headers/bdev_zone.o 00:03:37.999 CXX test/cpp_headers/bit_array.o 00:03:37.999 CXX test/cpp_headers/bit_pool.o 00:03:37.999 CXX test/cpp_headers/blobfs_bdev.o 00:03:37.999 CXX test/cpp_headers/blob_bdev.o 00:03:37.999 CXX test/cpp_headers/blob.o 00:03:37.999 CXX test/cpp_headers/blobfs.o 00:03:37.999 CXX test/cpp_headers/config.o 00:03:37.999 CXX test/cpp_headers/conf.o 00:03:37.999 CXX test/cpp_headers/cpuset.o 00:03:37.999 CXX test/cpp_headers/crc16.o 00:03:37.999 CXX test/cpp_headers/crc32.o 00:03:37.999 CXX test/cpp_headers/crc64.o 00:03:37.999 CXX test/cpp_headers/dif.o 00:03:37.999 CXX test/cpp_headers/endian.o 00:03:37.999 CXX test/cpp_headers/dma.o 00:03:37.999 CXX test/cpp_headers/env.o 00:03:37.999 CXX test/cpp_headers/env_dpdk.o 00:03:37.999 CXX test/cpp_headers/fd.o 00:03:37.999 CXX test/cpp_headers/fd_group.o 00:03:37.999 CXX test/cpp_headers/event.o 00:03:37.999 CXX test/cpp_headers/file.o 00:03:37.999 CXX test/cpp_headers/ftl.o 00:03:37.999 CXX test/cpp_headers/gpt_spec.o 00:03:37.999 CXX test/cpp_headers/hexlify.o 00:03:37.999 CXX test/cpp_headers/idxd_spec.o 00:03:37.999 CXX test/cpp_headers/histogram_data.o 00:03:37.999 CXX test/cpp_headers/idxd.o 00:03:37.999 CXX test/cpp_headers/init.o 00:03:37.999 CXX test/cpp_headers/ioat.o 00:03:37.999 CXX test/cpp_headers/ioat_spec.o 00:03:37.999 CXX test/cpp_headers/iscsi_spec.o 00:03:37.999 CXX test/cpp_headers/json.o 00:03:37.999 CXX test/cpp_headers/keyring.o 00:03:37.999 CXX test/cpp_headers/jsonrpc.o 00:03:37.999 CXX test/cpp_headers/keyring_module.o 00:03:37.999 CXX test/cpp_headers/log.o 00:03:37.999 CXX test/cpp_headers/lvol.o 00:03:37.999 CXX test/cpp_headers/likely.o 00:03:37.999 CXX test/cpp_headers/mmio.o 00:03:37.999 CXX test/cpp_headers/memory.o 00:03:38.273 CXX test/cpp_headers/nbd.o 00:03:38.273 CXX test/cpp_headers/net.o 00:03:38.273 CXX test/cpp_headers/notify.o 00:03:38.273 CXX test/cpp_headers/nvme_intel.o 00:03:38.273 CXX test/cpp_headers/nvme.o 00:03:38.273 CXX test/cpp_headers/nvme_ocssd.o 00:03:38.273 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:38.273 CXX test/cpp_headers/nvme_spec.o 00:03:38.273 CXX test/cpp_headers/nvme_zns.o 00:03:38.273 CXX test/cpp_headers/nvmf_cmd.o 00:03:38.273 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:03:38.273 CXX test/cpp_headers/nvmf.o 00:03:38.273 CXX test/cpp_headers/nvmf_spec.o 00:03:38.273 CXX test/cpp_headers/nvmf_transport.o 00:03:38.273 CXX test/cpp_headers/opal.o 00:03:38.273 CXX test/cpp_headers/opal_spec.o 00:03:38.273 CXX test/cpp_headers/pci_ids.o 00:03:38.273 CXX test/cpp_headers/pipe.o 00:03:38.273 CXX test/cpp_headers/queue.o 00:03:38.273 CXX test/cpp_headers/rpc.o 00:03:38.273 CXX test/cpp_headers/reduce.o 00:03:38.273 CXX test/cpp_headers/scheduler.o 00:03:38.273 CXX test/cpp_headers/scsi.o 00:03:38.273 CC test/thread/poller_perf/poller_perf.o 00:03:38.273 CXX test/cpp_headers/scsi_spec.o 00:03:38.273 CXX test/cpp_headers/sock.o 00:03:38.273 CC examples/ioat/verify/verify.o 00:03:38.273 CXX test/cpp_headers/stdinc.o 00:03:38.273 CXX test/cpp_headers/string.o 00:03:38.273 CXX test/cpp_headers/thread.o 00:03:38.273 CXX test/cpp_headers/trace.o 00:03:38.273 CXX test/cpp_headers/trace_parser.o 00:03:38.273 CXX test/cpp_headers/tree.o 00:03:38.273 CXX test/cpp_headers/ublk.o 00:03:38.273 CC test/app/stub/stub.o 00:03:38.273 CXX test/cpp_headers/util.o 00:03:38.273 CC examples/util/zipf/zipf.o 00:03:38.273 CC test/app/histogram_perf/histogram_perf.o 00:03:38.273 CC test/env/vtophys/vtophys.o 00:03:38.273 CC examples/ioat/perf/perf.o 00:03:38.273 CC test/app/jsoncat/jsoncat.o 00:03:38.273 CXX test/cpp_headers/uuid.o 00:03:38.273 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:38.273 CC app/fio/nvme/fio_plugin.o 00:03:38.273 CC test/env/memory/memory_ut.o 00:03:38.273 CC test/env/pci/pci_ut.o 00:03:38.273 CC test/app/bdev_svc/bdev_svc.o 00:03:38.273 CXX test/cpp_headers/version.o 00:03:38.273 CC test/dma/test_dma/test_dma.o 00:03:38.273 CC app/fio/bdev/fio_plugin.o 00:03:38.273 CXX test/cpp_headers/vfio_user_pci.o 00:03:38.273 LINK spdk_lspci 00:03:38.548 CXX test/cpp_headers/vfio_user_spec.o 00:03:38.548 LINK rpc_client_test 00:03:38.548 LINK interrupt_tgt 00:03:38.820 LINK spdk_nvme_discover 00:03:38.820 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:38.820 LINK nvmf_tgt 00:03:38.820 LINK iscsi_tgt 00:03:38.820 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:38.820 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:38.820 CC test/env/mem_callbacks/mem_callbacks.o 00:03:38.820 LINK spdk_trace_record 00:03:38.820 LINK poller_perf 00:03:38.820 LINK vtophys 00:03:38.820 LINK spdk_tgt 00:03:38.820 LINK zipf 00:03:38.820 LINK jsoncat 00:03:38.820 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:38.820 LINK histogram_perf 00:03:39.079 CXX test/cpp_headers/vhost.o 00:03:39.079 CXX test/cpp_headers/vmd.o 00:03:39.079 CXX test/cpp_headers/xor.o 00:03:39.079 LINK stub 00:03:39.079 CXX test/cpp_headers/zipf.o 00:03:39.079 LINK env_dpdk_post_init 00:03:39.079 LINK bdev_svc 00:03:39.079 LINK verify 00:03:39.079 LINK ioat_perf 00:03:39.079 LINK spdk_dd 00:03:39.079 LINK spdk_trace 00:03:39.079 LINK pci_ut 00:03:39.337 LINK test_dma 00:03:39.337 LINK spdk_bdev 00:03:39.337 LINK vhost_fuzz 00:03:39.337 LINK spdk_nvme 00:03:39.337 LINK nvme_fuzz 00:03:39.337 LINK spdk_nvme_identify 00:03:39.337 LINK spdk_top 00:03:39.337 LINK mem_callbacks 00:03:39.337 LINK spdk_nvme_perf 00:03:39.596 CC app/vhost/vhost.o 00:03:39.596 CC test/event/reactor_perf/reactor_perf.o 00:03:39.596 CC test/event/reactor/reactor.o 00:03:39.596 CC test/event/event_perf/event_perf.o 00:03:39.596 CC examples/idxd/perf/perf.o 00:03:39.596 CC examples/sock/hello_world/hello_sock.o 00:03:39.596 CC examples/thread/thread/thread_ex.o 00:03:39.596 CC test/event/scheduler/scheduler.o 
00:03:39.596 CC examples/vmd/led/led.o 00:03:39.596 CC test/event/app_repeat/app_repeat.o 00:03:39.596 CC examples/vmd/lsvmd/lsvmd.o 00:03:39.596 LINK reactor 00:03:39.596 LINK reactor_perf 00:03:39.596 LINK event_perf 00:03:39.596 LINK vhost 00:03:39.596 LINK lsvmd 00:03:39.596 LINK led 00:03:39.855 LINK memory_ut 00:03:39.855 LINK app_repeat 00:03:39.855 CC test/nvme/fused_ordering/fused_ordering.o 00:03:39.855 CC test/nvme/e2edp/nvme_dp.o 00:03:39.855 CC test/nvme/sgl/sgl.o 00:03:39.855 CC test/nvme/err_injection/err_injection.o 00:03:39.855 CC test/nvme/fdp/fdp.o 00:03:39.855 CC test/nvme/compliance/nvme_compliance.o 00:03:39.855 CC test/nvme/startup/startup.o 00:03:39.855 CC test/nvme/reset/reset.o 00:03:39.855 CC test/nvme/cuse/cuse.o 00:03:39.855 CC test/nvme/overhead/overhead.o 00:03:39.855 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:39.855 CC test/nvme/reserve/reserve.o 00:03:39.855 CC test/nvme/boot_partition/boot_partition.o 00:03:39.855 CC test/nvme/connect_stress/connect_stress.o 00:03:39.855 CC test/nvme/aer/aer.o 00:03:39.855 CC test/accel/dif/dif.o 00:03:39.855 CC test/nvme/simple_copy/simple_copy.o 00:03:39.855 CC test/blobfs/mkfs/mkfs.o 00:03:39.855 LINK hello_sock 00:03:39.855 LINK scheduler 00:03:39.855 LINK thread 00:03:39.855 LINK idxd_perf 00:03:39.855 CC test/lvol/esnap/esnap.o 00:03:39.855 LINK err_injection 00:03:39.855 LINK boot_partition 00:03:39.855 LINK connect_stress 00:03:39.855 LINK doorbell_aers 00:03:39.855 LINK startup 00:03:39.855 LINK reserve 00:03:39.855 LINK fused_ordering 00:03:40.114 LINK mkfs 00:03:40.114 LINK simple_copy 00:03:40.114 LINK nvme_dp 00:03:40.114 LINK reset 00:03:40.114 LINK sgl 00:03:40.114 LINK overhead 00:03:40.114 LINK nvme_compliance 00:03:40.114 LINK aer 00:03:40.114 LINK fdp 00:03:40.114 LINK dif 00:03:40.114 LINK iscsi_fuzz 00:03:40.372 CC examples/nvme/arbitration/arbitration.o 00:03:40.372 CC examples/nvme/hotplug/hotplug.o 00:03:40.372 CC examples/nvme/reconnect/reconnect.o 00:03:40.372 CC examples/nvme/abort/abort.o 00:03:40.372 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:40.372 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:40.372 CC examples/nvme/hello_world/hello_world.o 00:03:40.372 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:40.372 CC examples/accel/perf/accel_perf.o 00:03:40.372 CC examples/blob/hello_world/hello_blob.o 00:03:40.372 CC examples/blob/cli/blobcli.o 00:03:40.372 LINK cmb_copy 00:03:40.372 LINK pmr_persistence 00:03:40.372 LINK hotplug 00:03:40.372 LINK hello_world 00:03:40.629 LINK arbitration 00:03:40.629 LINK reconnect 00:03:40.629 LINK abort 00:03:40.629 LINK hello_blob 00:03:40.629 LINK nvme_manage 00:03:40.629 CC test/bdev/bdevio/bdevio.o 00:03:40.629 LINK accel_perf 00:03:40.629 LINK cuse 00:03:40.886 LINK blobcli 00:03:40.886 LINK bdevio 00:03:41.144 CC examples/bdev/bdevperf/bdevperf.o 00:03:41.144 CC examples/bdev/hello_world/hello_bdev.o 00:03:41.402 LINK hello_bdev 00:03:41.968 LINK bdevperf 00:03:42.227 CC examples/nvmf/nvmf/nvmf.o 00:03:42.792 LINK nvmf 00:03:43.369 LINK esnap 00:03:43.688 00:03:43.688 real 0m48.737s 00:03:43.688 user 6m17.123s 00:03:43.688 sys 4m12.199s 00:03:43.688 07:09:15 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:43.688 07:09:15 make -- common/autotest_common.sh@10 -- $ set +x 00:03:43.688 ************************************ 00:03:43.688 END TEST make 00:03:43.688 ************************************ 00:03:43.688 07:09:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:43.688 07:09:16 -- pm/common@29 -- $ 
signal_monitor_resources TERM 00:03:43.688 07:09:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:43.688 07:09:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.688 07:09:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:43.688 07:09:16 -- pm/common@44 -- $ pid=2384385 00:03:43.688 07:09:16 -- pm/common@50 -- $ kill -TERM 2384385 00:03:43.688 07:09:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.688 07:09:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:43.688 07:09:16 -- pm/common@44 -- $ pid=2384387 00:03:43.688 07:09:16 -- pm/common@50 -- $ kill -TERM 2384387 00:03:43.688 07:09:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.688 07:09:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:43.688 07:09:16 -- pm/common@44 -- $ pid=2384389 00:03:43.688 07:09:16 -- pm/common@50 -- $ kill -TERM 2384389 00:03:43.688 07:09:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.688 07:09:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:43.688 07:09:16 -- pm/common@44 -- $ pid=2384415 00:03:43.688 07:09:16 -- pm/common@50 -- $ sudo -E kill -TERM 2384415 00:03:43.688 07:09:16 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:43.688 07:09:16 -- nvmf/common.sh@7 -- # uname -s 00:03:43.688 07:09:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.688 07:09:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.688 07:09:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.688 07:09:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.688 07:09:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.688 07:09:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.688 07:09:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.688 07:09:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.688 07:09:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.688 07:09:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.688 07:09:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:43.688 07:09:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:43.688 07:09:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.688 07:09:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.688 07:09:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:43.688 07:09:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:43.688 07:09:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:43.688 07:09:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.688 07:09:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.688 07:09:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.688 07:09:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.688 07:09:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.688 07:09:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.688 07:09:16 -- paths/export.sh@5 -- # export PATH 00:03:43.688 07:09:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.688 07:09:16 -- nvmf/common.sh@47 -- # : 0 00:03:43.688 07:09:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:43.688 07:09:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:43.688 07:09:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:43.688 07:09:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.688 07:09:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.688 07:09:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:43.688 07:09:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:43.688 07:09:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:43.688 07:09:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:43.688 07:09:16 -- spdk/autotest.sh@32 -- # uname -s 00:03:43.688 07:09:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:43.688 07:09:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:43.688 07:09:16 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:43.688 07:09:16 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:43.688 07:09:16 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:43.688 07:09:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:43.688 07:09:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:43.688 07:09:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:43.688 07:09:16 -- spdk/autotest.sh@48 -- # udevadm_pid=2445070 00:03:43.688 07:09:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:43.688 07:09:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:43.688 07:09:16 -- pm/common@17 -- # local monitor 00:03:43.688 07:09:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.688 07:09:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.688 07:09:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.688 07:09:16 -- pm/common@21 -- # date +%s 00:03:43.688 07:09:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.688 07:09:16 -- pm/common@21 -- # date +%s 00:03:43.688 07:09:16 -- 
pm/common@25 -- # sleep 1 00:03:43.688 07:09:16 -- pm/common@21 -- # date +%s 00:03:43.688 07:09:16 -- pm/common@21 -- # date +%s 00:03:43.688 07:09:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721884156 00:03:43.688 07:09:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721884156 00:03:43.688 07:09:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721884156 00:03:43.688 07:09:16 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721884156 00:03:43.947 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721884156_collect-vmstat.pm.log 00:03:43.947 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721884156_collect-cpu-load.pm.log 00:03:43.947 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721884156_collect-cpu-temp.pm.log 00:03:43.947 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721884156_collect-bmc-pm.bmc.pm.log 00:03:44.880 07:09:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.880 07:09:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.880 07:09:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:44.880 07:09:17 -- common/autotest_common.sh@10 -- # set +x 00:03:44.880 07:09:17 -- spdk/autotest.sh@59 -- # create_test_list 00:03:44.880 07:09:17 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:44.880 07:09:17 -- common/autotest_common.sh@10 -- # set +x 00:03:44.880 07:09:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:44.880 07:09:17 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:44.880 07:09:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:44.880 07:09:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:44.880 07:09:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:44.880 07:09:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.880 07:09:17 -- common/autotest_common.sh@1455 -- # uname 00:03:44.880 07:09:17 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:44.880 07:09:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.880 07:09:17 -- common/autotest_common.sh@1475 -- # uname 00:03:44.880 07:09:17 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:44.880 07:09:17 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:44.880 07:09:17 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:44.880 07:09:17 -- spdk/autotest.sh@72 -- # hash lcov 00:03:44.880 07:09:17 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:44.880 07:09:17 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:44.880 --rc lcov_branch_coverage=1 00:03:44.880 --rc 
lcov_function_coverage=1 00:03:44.880 --rc genhtml_branch_coverage=1 00:03:44.880 --rc genhtml_function_coverage=1 00:03:44.880 --rc genhtml_legend=1 00:03:44.880 --rc geninfo_all_blocks=1 00:03:44.880 ' 00:03:44.880 07:09:17 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:44.880 --rc lcov_branch_coverage=1 00:03:44.880 --rc lcov_function_coverage=1 00:03:44.880 --rc genhtml_branch_coverage=1 00:03:44.880 --rc genhtml_function_coverage=1 00:03:44.880 --rc genhtml_legend=1 00:03:44.880 --rc geninfo_all_blocks=1 00:03:44.880 ' 00:03:44.880 07:09:17 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:44.880 --rc lcov_branch_coverage=1 00:03:44.880 --rc lcov_function_coverage=1 00:03:44.880 --rc genhtml_branch_coverage=1 00:03:44.880 --rc genhtml_function_coverage=1 00:03:44.880 --rc genhtml_legend=1 00:03:44.880 --rc geninfo_all_blocks=1 00:03:44.880 --no-external' 00:03:44.880 07:09:17 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:44.880 --rc lcov_branch_coverage=1 00:03:44.880 --rc lcov_function_coverage=1 00:03:44.880 --rc genhtml_branch_coverage=1 00:03:44.880 --rc genhtml_function_coverage=1 00:03:44.880 --rc genhtml_legend=1 00:03:44.880 --rc geninfo_all_blocks=1 00:03:44.880 --no-external' 00:03:44.880 07:09:17 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:44.880 lcov: LCOV version 1.14 00:03:44.880 07:09:17 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:46.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:46.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:46.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:46.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:46.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:46.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:46.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:46.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:46.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:46.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:46.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:46.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:46.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:46.253 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:46.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:46.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:46.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 
00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 
00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:46.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:46.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:46.513 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:46.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:46.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:46.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:46.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:46.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:46.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:46.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:46.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:46.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 
00:03:46.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:46.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:46.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:46.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:46.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:46.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:46.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:46.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:46.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:46.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:46.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:46.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:46.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:46.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:46.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:46.772 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:46.772 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:46.772 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:46.772 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:46.772 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:46.772 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:46.772 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:58.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:58.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:11.168 07:09:41 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:11.168 07:09:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:11.168 07:09:41 -- common/autotest_common.sh@10 -- # set +x 00:04:11.168 07:09:41 -- spdk/autotest.sh@91 -- # rm -f 00:04:11.168 07:09:41 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.070 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:13.070 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:13.070 0000:00:04.5 (8086 2021): Already using the 
ioatdma driver 00:04:13.070 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:13.070 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:13.327 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:13.327 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:13.327 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:13.327 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:13.327 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:13.327 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:13.327 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:13.327 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:13.327 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:13.585 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:13.585 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:13.585 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:04:13.585 07:09:45 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:13.585 07:09:45 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:13.585 07:09:45 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:13.585 07:09:45 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:13.585 07:09:45 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:13.585 07:09:45 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:13.585 07:09:45 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:13.585 07:09:45 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:13.585 07:09:45 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:13.585 07:09:45 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:13.585 07:09:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:13.585 07:09:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:13.585 07:09:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:13.585 07:09:45 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:13.585 07:09:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:13.585 No valid GPT data, bailing 00:04:13.585 07:09:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:13.585 07:09:45 -- scripts/common.sh@391 -- # pt= 00:04:13.585 07:09:45 -- scripts/common.sh@392 -- # return 1 00:04:13.585 07:09:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:13.585 1+0 records in 00:04:13.585 1+0 records out 00:04:13.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476946 s, 220 MB/s 00:04:13.585 07:09:46 -- spdk/autotest.sh@118 -- # sync 00:04:13.585 07:09:46 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:13.585 07:09:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:13.585 07:09:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:21.738 07:09:53 -- spdk/autotest.sh@124 -- # uname -s 00:04:21.738 07:09:53 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:21.738 07:09:53 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:04:21.738 07:09:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.738 07:09:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.738 07:09:53 -- common/autotest_common.sh@10 -- # set +x 
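[Note on the pre-cleanup step traced above: autotest probes /dev/nvme0n1 with spdk-gpt.py and blkid -s PTTYPE, sees "No valid GPT data, bailing" with an empty pt= value, and only then zeroes the first MiB with dd and syncs. A minimal standalone sketch of that guard logic, assuming only util-linux blkid and root access; safe_wipe is an illustrative name, not an SPDK helper:]

#!/usr/bin/env bash
# Sketch: wipe the label area of an NVMe namespace only when no partition
# table is present, mirroring the block_in_use/dd sequence in the trace
# above. "safe_wipe" is illustrative, not an SPDK function.
safe_wipe() {
    local dev=$1
    local pt
    # blkid prints the table type (gpt, dos, ...) or nothing at all;
    # "|| true" keeps a not-found exit status from mattering here.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -n $pt ]]; then
        echo "$dev carries a $pt partition table, refusing to wipe" >&2
        return 1
    fi
    dd if=/dev/zero of="$dev" bs=1M count=1   # zero the first MiB
    sync
}
# example (destructive, scratch devices only): safe_wipe /dev/nvme0n1

[The empty-string check is the same test visible in scripts/common.sh above, where pt= stays unset and block_in_use returns 1, letting the wipe proceed.]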
00:04:21.738 ************************************ 00:04:21.738 START TEST setup.sh 00:04:21.738 ************************************ 00:04:21.738 07:09:53 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:04:21.738 * Looking for test storage... 00:04:21.738 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:21.738 07:09:53 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:21.738 07:09:53 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:21.738 07:09:53 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:04:21.738 07:09:53 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.738 07:09:53 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.738 07:09:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.738 ************************************ 00:04:21.738 START TEST acl 00:04:21.738 ************************************ 00:04:21.738 07:09:53 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:04:21.738 * Looking for test storage... 00:04:21.738 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:21.738 07:09:53 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:21.738 07:09:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:21.738 07:09:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:21.738 07:09:53 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:21.738 07:09:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.738 07:09:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:21.738 07:09:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:21.738 07:09:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:21.738 07:09:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.738 07:09:53 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:21.738 07:09:53 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:21.738 07:09:53 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:21.738 07:09:53 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:21.738 07:09:53 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:21.738 07:09:53 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.738 07:09:53 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.031 07:09:57 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:25.031 07:09:57 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:25.031 07:09:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.031 07:09:57 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:25.031 07:09:57 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.031 07:09:57 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:29.228 Hugepages 00:04:29.228 node hugesize free / total 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:29.228 07:10:01 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 00:04:29.228 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == 
*:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:29.228 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:29.229 07:10:01 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:29.229 07:10:01 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.229 07:10:01 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.229 07:10:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:29.229 ************************************ 00:04:29.229 START TEST denied 00:04:29.229 
************************************ 00:04:29.229 07:10:01 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:29.229 07:10:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:04:29.229 07:10:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:29.229 07:10:01 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:04:29.229 07:10:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.229 07:10:01 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:33.423 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:04:33.423 07:10:05 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:04:33.423 07:10:05 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:33.423 07:10:05 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:33.423 07:10:05 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:04:33.423 07:10:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:04:33.423 07:10:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:33.423 07:10:05 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:33.423 07:10:05 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:33.423 07:10:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.423 07:10:05 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.993 00:04:39.993 real 0m9.824s 00:04:39.993 user 0m3.120s 00:04:39.993 sys 0m6.068s 00:04:39.993 07:10:11 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.993 07:10:11 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:39.993 ************************************ 00:04:39.993 END TEST denied 00:04:39.993 ************************************ 00:04:39.993 07:10:11 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:39.993 07:10:11 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.993 07:10:11 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.993 07:10:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:39.993 ************************************ 00:04:39.993 START TEST allowed 00:04:39.993 ************************************ 00:04:39.993 07:10:11 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:39.993 07:10:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:04:39.993 07:10:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:39.993 07:10:11 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:04:39.993 07:10:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.993 07:10:11 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:45.266 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:45.266 07:10:16 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:45.266 07:10:16 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:45.266 07:10:16 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:45.266 07:10:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ 
reset == output ]] 00:04:45.267 07:10:16 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.583 00:04:49.583 real 0m10.033s 00:04:49.583 user 0m2.793s 00:04:49.583 sys 0m5.524s 00:04:49.583 07:10:21 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.583 07:10:21 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:49.583 ************************************ 00:04:49.583 END TEST allowed 00:04:49.583 ************************************ 00:04:49.583 00:04:49.583 real 0m28.223s 00:04:49.583 user 0m8.768s 00:04:49.583 sys 0m17.280s 00:04:49.583 07:10:21 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.583 07:10:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:49.583 ************************************ 00:04:49.583 END TEST acl 00:04:49.583 ************************************ 00:04:49.583 07:10:21 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:49.583 07:10:21 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.583 07:10:21 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.583 07:10:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:49.583 ************************************ 00:04:49.583 START TEST hugepages 00:04:49.583 ************************************ 00:04:49.583 07:10:21 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:49.583 * Looking for test storage... 00:04:49.583 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.583 07:10:21 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36850656 kB' 'MemAvailable: 40945120 kB' 'Buffers: 4096 kB' 'Cached: 14785968 kB' 'SwapCached: 0 kB' 'Active: 11612456 kB' 'Inactive: 
3699080 kB' 'Active(anon): 11134228 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524824 kB' 'Mapped: 213224 kB' 'Shmem: 10612756 kB' 'KReclaimable: 564648 kB' 'Slab: 1273960 kB' 'SReclaimable: 564648 kB' 'SUnreclaim: 709312 kB' 'KernelStack: 22592 kB' 'PageTables: 9220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 12625000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220756 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.584 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:04:49.584
[... repetitive xtrace elided: for every remaining /proc/meminfo field from Inactive through CmaFree (each listed verbatim in the meminfo snapshot above), the get_meminfo loop emits the same three records: [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]], continue, and read -r var val _ ...]
00:04:49.585 07:10:21 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.585 07:10:21 setup.sh.hugepages -- 
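For reference, the scan traced above is the get_meminfo helper in test/setup/common.sh walking /proc/meminfo key by key until the requested field (here Hugepagesize) matches, echoing its value and returning. A minimal bash sketch of that pattern, reconstructed from the xtrace (simplified to a while-read loop; per-node queries and the "Node N " prefix stripping done by the real helper via mapfile are omitted here):

    # Sketch only: fetch one field from /proc/meminfo.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # xtrace prints this test as, e.g., [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]];
            # the escaped form is just how bash traces a literal (quoted) match.
            [[ $var == "$get" ]] && echo "$val" && return 0
        done </proc/meminfo
        return 1
    }
    # get_meminfo Hugepagesize  -> 2048 on this machine (2048 kB default hugepages)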
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:49.585 07:10:21 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:49.586 07:10:21 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:49.586 07:10:21 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:49.586 07:10:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:49.586 ************************************
00:04:49.586 START TEST default_setup
00:04:49.586 ************************************
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:49.586 07:10:21 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:52.878 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:52.878 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:54.788 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
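The block above clears every pre-existing hugepage pool and then sizes the test pool: get_test_nr_hugepages 2097152 0 asks for 2097152 kB on node 0, and 2097152 kB / 2048 kB per page = 1024 pages, which is exactly the nr_hugepages=1024 the trace records; scripts/setup.sh then rebinds the IOAT channels and the NVMe device to vfio-pci before verification starts. A bash sketch of the two hugepage steps, reconstructed from the xtrace (the real code is clear_hp/get_test_nr_hugepages in test/setup/hugepages.sh; writing to these sysfs files requires root):

    # Sketch only: drop all per-node hugepage pools, then size the test pool.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # release this pool entirely
            done
        done
        export CLEAR_HUGE=yes
    }

    # 2097152 kB requested / 2048 kB per page = 1024 pages, pinned to node 0:
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages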
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39012236 kB' 'MemAvailable: 43106408 kB' 'Buffers: 4096 kB' 'Cached: 14786108 kB' 'SwapCached: 0 kB' 'Active: 11632724 kB' 'Inactive: 3699080 kB' 'Active(anon): 11154496 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545168 kB' 'Mapped: 213944 kB' 'Shmem: 10612896 kB' 'KReclaimable: 564424 kB' 'Slab: 1272320 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707896 kB' 'KernelStack: 22560 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12646544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220596 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:04:54.788 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal through HardwareCorrupted all fail the AnonHugePages match and hit continue]
00:04:54.790 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:54.790 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:54.790 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:54.790 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:54.790 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:54.790 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@17-28 -- # [xtrace condensed: get_meminfo prologue as above, with local get=HugePages_Surp, node=, mem_f=/proc/meminfo, mapfile -t mem]
00:04:54.790 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.790 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:54.790 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:54.790 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39010908 kB' 'MemAvailable: 43105148 kB' 'Buffers: 4096 kB' 'Cached: 14786112 kB' 'SwapCached: 0 kB' 'Active: 11631064 kB' 'Inactive: 3699080 kB' 'Active(anon): 11152836 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541948 kB' 'Mapped: 213696 kB' 'Shmem: 10612900 kB' 'KReclaimable: 564424 kB' 'Slab: 1272276 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707852 kB' 'KernelStack: 22592 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12647616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220596 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
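The snapshots themselves confirm that default_setup got the pool it asked for: HugePages_Total: 1024 at Hugepagesize: 2048 kB gives 1024 * 2048 kB = 2097152 kB, matching both the Hugetlb line and the 2097152 kB request traced earlier, with all 1024 pages still free. A one-liner to make the same cross-check on a live box (a hypothetical helper, not part of the test suite):

    # Verify HugePages_Total * Hugepagesize == Hugetlb in /proc/meminfo.
    awk '/^HugePages_Total/ { n = $2 }
         /^Hugepagesize/    { sz = $2 }
         /^Hugetlb/         { if (n * sz == $2) print "consistent"; else print "mismatch" }' /proc/meminfo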
00:04:54.790 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal through HugePages_Rsvd all fail the HugePages_Surp match and hit continue]
00:04:54.792 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.792 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:54.792 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
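verify_nr_hugepages is collecting one meminfo field per pass: AnonHugePages into anon (0 kB, so transparent hugepages are not inflating the numbers), HugePages_Surp into surp (0, no pages allocated beyond the configured pool), and next HugePages_Rsvd (pages reserved by a process but not yet faulted in). In shape, and hedged as a reconstruction rather than the script's exact code, the collection step amounts to three calls to the get_meminfo helper sketched earlier:

    # Sketch only: the three per-field queries the trace walks through.
    anon=$(get_meminfo AnonHugePages)   # 0 in this run
    surp=$(get_meminfo HugePages_Surp)  # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)  # queried next in the log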
00:04:54.792 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:54.792 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:54.792 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@17-28 -- # [xtrace condensed: get_meminfo prologue as above, with local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem]
00:04:54.792 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.792 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:54.792 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:54.792 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39016316 kB' 'MemAvailable: 43110556 kB' 'Buffers: 4096 kB' 'Cached: 14786132 kB' 'SwapCached: 0 kB' 'Active: 11629416 kB' 'Inactive: 3699080 kB' 'Active(anon): 11151188 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542220 kB' 'Mapped: 213352 kB' 'Shmem: 10612920 kB' 'KReclaimable: 564424 kB' 'Slab: 1272276 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707852 kB' 'KernelStack: 22704 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12642456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220596 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal through Dirty fail the HugePages_Rsvd match and hit continue; the scan continues at Writeback]
-- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.793 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:54.794 nr_hugepages=1024 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.794 resv_hugepages=0 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.794 surplus_hugepages=0 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.794 anon_hugepages=0 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:54.794 
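The pass above is one complete invocation of the get_meminfo helper from setup/common.sh: it snapshots /proc/meminfo (or a per-node meminfo file when a node id is supplied), then scans key/value rows until the requested field matches. The backslash-escaped right-hand sides such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d appear because [[ x == y ]] treats y as a glob pattern, so the script escapes it to force a literal comparison; every [[ ... ]] / continue pair in the trace is one loop iteration over a meminfo row. A minimal reconstruction pieced together from the trace — quoting and exact structure are approximations, not the verbatim SPDK source:

    shopt -s extglob    # the "Node +([0-9]) " prefix strip at common.sh@29 needs extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # with a node id, read that node's counters instead of the global file
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node N " column per-node files add
        while IFS=': ' read -r var val _; do    # "HugePages_Rsvd: 0" -> var=..., val=0
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With HugePages_Rsvd requested, the scan falls through every earlier row (the long run of continue lines) until the match fires echo 0 / return 0, which hugepages.sh@100 captures as resv=0.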
07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39017200 kB' 'MemAvailable: 43111440 kB' 'Buffers: 4096 kB' 'Cached: 14786132 kB' 'SwapCached: 0 kB' 'Active: 11629952 kB' 'Inactive: 3699080 kB' 'Active(anon): 11151724 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542252 kB' 'Mapped: 213344 kB' 'Shmem: 10612920 kB' 'KReclaimable: 564424 kB' 'Slab: 1272276 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707852 kB' 'KernelStack: 22640 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12642476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220612 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.794 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 
07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.795 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21883944 kB' 'MemUsed: 10708140 kB' 'SwapCached: 0 kB' 'Active: 6378532 kB' 'Inactive: 410828 kB' 'Active(anon): 6101220 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6637660 kB' 'Mapped: 78324 kB' 'AnonPages: 155024 kB' 'Shmem: 5949520 kB' 'KernelStack: 12648 kB' 'PageTables: 5784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379128 kB' 'Slab: 735304 kB' 'SReclaimable: 379128 kB' 'SUnreclaim: 356176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.796 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:54.797 node0=1024 expecting 1024 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:54.797 00:04:54.797 real 0m5.558s 00:04:54.797 user 0m1.154s 00:04:54.797 sys 0m2.389s 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.797 07:10:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:54.797 ************************************ 00:04:54.797 END TEST default_setup 00:04:54.797 ************************************ 00:04:55.057 07:10:27 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:55.057 07:10:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.057 07:10:27 setup.sh.hugepages -- 
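The closing [[ 1024 == \1\0\2\4 ]] is the same idiom as the \H\u\g\e... patterns throughout: inside [[ x == y ]] bash glob-matches against y, so the xtrace shows the pattern with every character escaped to keep it literal. The two checks below behave identically:

    val=1024
    [[ $val == \1\0\2\4 ]] && echo match    # escaped pattern, as the xtrace renders it
    [[ $val == "1024" ]]   && echo match    # quoting the right-hand side is equivalent

With that check green, default_setup ends (about 5.6 s wall time) and run_test moves on to per_node_1G_alloc.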
00:04:55.057 07:10:27 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:55.057 07:10:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:55.057 07:10:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:55.057 07:10:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:55.057 ************************************
00:04:55.057 START TEST per_node_1G_alloc
00:04:55.057 ************************************
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:55.057 07:10:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
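get_test_nr_hugepages is handed 1048576 kB (1 GiB) plus node ids 0 and 1, and with the 2048 kB default hugepage size that works out to the nr_hugepages=512 per node seen above, exported as NRHUGE=512 with HUGENODE=0,1. A re-derivation of that arithmetic (variable names here are illustrative, not the script's own):

size_kb=1048576                    # requested allocation per node, in kB (1 GiB)
hugepage_kb=2048                   # Hugepagesize reported in /proc/meminfo below
nodes=(0 1)                        # node ids passed through HUGENODE

per_node=$(( size_kb / hugepage_kb ))    # 512, matching nr_hugepages=512
total=$(( per_node * ${#nodes[@]} ))     # 1024, matching nr_hugepages=1024 after setup
echo "NRHUGE=$per_node HUGENODE=0,1 total=$total"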
00:04:59.258 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:59.258 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38998372 kB' 'MemAvailable: 43092612 kB' 'Buffers: 4096 kB' 'Cached: 14786272 kB' 'SwapCached: 0 kB' 'Active: 11627896 kB' 'Inactive: 3699080 kB' 'Active(anon): 11149668 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539936 kB' 'Mapped: 212208 kB' 'Shmem: 10613060 kB' 'KReclaimable: 564424 kB' 'Slab: 1271840 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707416 kB' 'KernelStack: 22528 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12630600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220612 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
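Worth noting from the preamble above: get_meminfo was called with an empty node, so the probe of /sys/devices/system/node/node/meminfo at common.sh@23 fails and mem_f stays /proc/meminfo. When a node id is supplied, the per-node file is used instead, and its "Node N " field prefixes are what common.sh@29 strips. A sketch of that source selection under those assumptions:

node=${1-}                         # e.g. 0 or 1; empty here, so the global file wins
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem <"$mem_f"           # one array element per meminfo line
shopt -s extglob                   # the +([0-9]) pattern below needs extglob
mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node "Node N " prefix, if present
printf '%s\n' "${mem[@]:0:3}"      # peek at the first few fields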
00:04:59.258 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [xtrace condensed: every field from MemTotal through HardwareCorrupted compared against AnonHugePages and skipped via continue]
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.259 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38998940 kB' 'MemAvailable: 43093180 kB' 'Buffers: 4096 kB' 'Cached: 14786272 kB' 'SwapCached: 0 kB' 'Active: 11628600 kB' 'Inactive: 3699080 kB' 'Active(anon): 11150372 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540652 kB' 'Mapped: 212196 kB' 'Shmem: 10613060 kB' 'KReclaimable: 564424 kB' 'Slab: 1271864 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707440 kB' 'KernelStack: 22528 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12633456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220564 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:04:59.260 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [xtrace condensed: every field from MemTotal through HugePages_Rsvd compared against HugePages_Surp and skipped via continue]
00:04:59.261 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.261 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38999992 kB' 'MemAvailable: 43094232 kB' 'Buffers: 4096 kB' 'Cached: 14786292 kB' 'SwapCached: 0 kB' 'Active: 11628588 kB' 'Inactive: 3699080 kB' 'Active(anon): 11150360 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540536 kB' 'Mapped: 212196 kB' 'Shmem: 10613080 kB' 'KReclaimable: 564424 kB' 'Slab: 1271884 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707460 kB' 'KernelStack: 22560 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12632896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220596 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
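At this point the trace has banked anon=0 and surp=0 and is fetching HugePages_Rsvd; the snapshot above already shows HugePages_Total: 1024 and HugePages_Free: 1024, i.e. the 512 pages configured on each of nodes 0 and 1 all materialized and none are surplus. A hedged sketch of the kind of consistency check this is building toward (the exact assertion in setup/hugepages.sh may differ):

total=1024                   # HugePages_Total from the snapshot above
free=1024                    # HugePages_Free
surp=0                       # parsed into surp at hugepages.sh@99
resv=0                       # HugePages_Rsvd, mid-parse when this excerpt ends
expected=$(( 512 * 2 ))      # NRHUGE=512 on each of nodes 0 and 1

(( total - surp == expected )) || echo "unexpected hugepage total: $total"
(( free == total ))            || echo "some hugepages already in use"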
00:04:59.262 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [xtrace condensed: fields from MemTotal through WritebackTmp compared against HugePages_Rsvd and skipped via continue]
00:04:59.263 07:10:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.263 07:10:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.263 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:59.264 nr_hugepages=1024 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.264 resv_hugepages=0 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.264 surplus_hugepages=0 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.264 anon_hugepages=0 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38999280 kB' 'MemAvailable: 43093520 kB' 'Buffers: 4096 kB' 'Cached: 14786292 kB' 'SwapCached: 0 kB' 'Active: 11629732 kB' 'Inactive: 3699080 kB' 'Active(anon): 11151504 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541704 kB' 'Mapped: 212720 kB' 'Shmem: 10613080 kB' 'KReclaimable: 564424 kB' 'Slab: 1271868 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707444 kB' 'KernelStack: 22448 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12635916 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 220612 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.265 07:10:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.266 07:10:31 
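The scan condensed above is SPDK's get_meminfo helper (setup/common.sh) walking meminfo one "key: value" pair at a time until the requested field matches, then echoing the value column. A minimal sketch of that lookup, reconstructed only from what the xtrace shows; treat the details as inferred rather than a verbatim copy of setup/common.sh:

shopt -s extglob   # the "Node N " prefix strip below uses an extended glob

# get_meminfo KEY [NODE] - print the value column for KEY, system-wide by
# default or from a NUMA node's meminfo when NODE is given.
get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node queries read the node-local file instead of the global one
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan key by key; on a match print the value and stop
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] || continue
                echo "$val"
                return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
}

# Usage matching the trace: get_meminfo HugePages_Rsvd -> 0,
# get_meminfo HugePages_Surp 0 -> node 0's surplus page count.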
00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38999280 kB' 'MemAvailable: 43093520 kB' 'Buffers: 4096 kB' 'Cached: 14786292 kB' 'SwapCached: 0 kB' 'Active: 11629732 kB' 'Inactive: 3699080 kB' 'Active(anon): 11151504 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541704 kB' 'Mapped: 212720 kB' 'Shmem: 10613080 kB' 'KReclaimable: 564424 kB' 'Slab: 1271868 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707444 kB' 'KernelStack: 22448 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12635916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220612 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:04:59.264 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: every key from MemTotal through Unaccepted compared against HugePages_Total and skipped with continue]
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22908196 kB' 'MemUsed: 9683888 kB' 'SwapCached: 0 kB' 'Active: 6384832 kB' 'Inactive: 410828 kB' 'Active(anon): 6107520 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6637760 kB' 'Mapped: 77960 kB' 'AnonPages: 161088 kB' 'Shmem: 5949620 kB' 'KernelStack: 12536 kB' 'PageTables: 5064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379128 kB' 'Slab: 734916 kB' 'SReclaimable: 379128 kB' 'SUnreclaim: 355788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:59.266 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: node 0 keys MemTotal through HugePages_Free compared against HugePages_Surp and skipped with continue]
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
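get_nodes, traced just before the per-node queries above, discovers the NUMA layout by globbing the node directories and records the share each node is expected to hold: 1024 pages over 2 nodes, so 512 apiece. A sketch of that step under the same caveat (names reused from the trace, details inferred):

shopt -s extglob   # node+([0-9]) below is an extended glob

declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} peels the path down to the bare node index
        nodes_sys[${node##*node}]=512
done
no_nodes=${#nodes_sys[@]}   # 2 on this box
(( no_nodes > 0 ))          # sanity check seen in the trace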
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.268 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 16088512 kB' 'MemUsed: 11614596 kB' 'SwapCached: 0 kB' 'Active: 5249348 kB' 'Inactive: 3288252 kB' 'Active(anon): 5048432 kB' 'Inactive(anon): 0 kB' 'Active(file): 200916 kB' 'Inactive(file): 3288252 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8152668 kB' 'Mapped: 134536 kB' 'AnonPages: 384980 kB' 'Shmem: 4663500 kB' 'KernelStack: 10088 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185296 kB' 'Slab: 536952 kB' 'SReclaimable: 185296 kB' 'SUnreclaim: 351656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 compares each node1 meminfo key, from MemTotal through HugePages_Free, against HugePages_Surp and skips it via continue]
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
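The hugepages.sh@115-128 entries around the two expectation lines are compact in trace form; below is a runnable sketch of what they do, using this run's values. nodes_sys, resv, and the echo format are inferred from the surrounding trace rather than copied from the script, and get_meminfo is the helper sketched earlier, stubbed here because the surplus is 0 in this run. The node1 iteration follows directly below.

nodes_test=(512 512)         # expected pages per node, from get_test_nr_hugepages_per_node
nodes_sys=(512 512)          # pages the kernel actually reports (gathered earlier in the test)
resv=0                       # reserved pages; HugePages_Rsvd is 0 in the snapshots
sorted_t=() sorted_s=()
get_meminfo() { echo 0; }    # stand-in: HugePages_Surp is 0 on both nodes here

for node in "${!nodes_test[@]}"; do                                    # @115
    (( nodes_test[node] += resv ))                                     # @116 fold in reserved pages
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))    # @117 fold in surplus pages
done
for node in "${!nodes_test[@]}"; do                                    # @126
    sorted_t[nodes_test[node]]=1                                       # @127 bucket expected counts
    sorted_s[nodes_sys[node]]=1                                        # @127 bucket live counts
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"  # @128
done

Run as-is, this prints the same node0=512 expecting 512 and node1=512 expecting 512 lines seen in the log.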
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:59.269
00:04:59.269 real 0m4.385s
00:04:59.269 user 0m1.613s
00:04:59.269 sys 0m2.851s
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:59.269 07:10:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:59.269 ************************************
00:04:59.269 END TEST per_node_1G_alloc
00:04:59.269 ************************************
00:04:59.530 07:10:31 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:59.530 07:10:31 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:59.530 07:10:31 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:59.530 07:10:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:59.530 ************************************
00:04:59.530 START TEST even_2G_alloc
00:04:59.530 ************************************
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
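The get_test_nr_hugepages / get_test_nr_hugepages_per_node entries above reduce the 2097152 kB request to 1024 default-size pages split evenly across both nodes. A sketch of that arithmetic with this run's numbers; the division by Hugepagesize is an inference (the trace only shows the resulting nr_hugepages=1024 and the two 512-page assignments), and the NRHUGE/HUGE_EVEN_ALLOC settings it feeds appear in the entries just below:

size=2097152                                    # kB requested (@49), i.e. 2 GiB
default_hugepages=2048                          # kB per page, per Hugepagesize in the snapshots
(( size >= default_hugepages )) || exit 1       # @55 sanity check
nr_hugepages=$(( size / default_hugepages ))    # 1024 pages (@57)

_no_nodes=2                                     # @65 NUMA nodes on this machine
nodes_test=()
while (( _no_nodes > 0 )); do                   # @81 walk the nodes, last one first
    nodes_test[_no_nodes - 1]=$(( nr_hugepages / 2 ))   # 512 per node (@82); 2 = node count
    (( _no_nodes-- ))
done
echo "per-node plan: ${nodes_test[*]}"          # prints: per-node plan: 512 512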
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:59.530 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.531 07:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:03.732 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:03.732 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:03.732 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:03.732 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:03.732 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.732 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.732 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
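The hugepages.sh@96 test above has already expanded to the current transparent-hugepage mode string, always [madvise] never. A small sketch of the check it implies; the sysfs path is the conventional source of that string and is an assumption here, since the trace only shows the expanded value:

# When THP is not pinned to "never", anonymous hugepage usage must be read
# and folded into the accounting, hence the AnonHugePages lookup that follows.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this host
if [[ $thp != *"[never]"* ]]; then                    # @96: madvise is the active mode here
    anon=$(get_meminfo AnonHugePages)                 # @97: returns 0 in this run
fi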
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.733 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39012968 kB' 'MemAvailable: 43107208 kB' 'Buffers: 4096 kB' 'Cached: 14786448 kB' 'SwapCached: 0 kB' 'Active: 11629876 kB' 'Inactive: 3699080 kB' 'Active(anon): 11151648 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541260 kB' 'Mapped: 212304 kB' 'Shmem: 10613236 kB' 'KReclaimable: 564424 kB' 'Slab: 1272684 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708260 kB' 'KernelStack: 22576 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12631724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220740 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
[xtrace condensed: setup/common.sh@31-32 compares every /proc/meminfo key from MemTotal through Percpu against AnonHugePages and skips each via continue]
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39013752 kB' 'MemAvailable: 43107992 kB' 'Buffers: 4096 kB' 'Cached: 14786448 kB' 'SwapCached: 0 kB' 'Active: 11629068 kB' 'Inactive: 3699080 kB' 'Active(anon): 11150840 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540824 kB' 'Mapped: 212208 kB' 'Shmem: 10613236 kB' 'KReclaimable: 564424 kB' 'Slab: 1272664 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708240 kB' 'KernelStack: 22528 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12631740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
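With anon=0 recorded, hugepages.sh@99 repeats the lookup for the system-wide surplus count against the fresh /proc/meminfo snapshot above. In short, verify_nr_hugepages is collecting these global counters (their later use is implied by the locals declared at hugepages.sh@92-94 but is not shown in this excerpt):

# Requires the get_meminfo helper sketched earlier.
anon=$(get_meminfo AnonHugePages)     # @97: 0 here, transparent hugepages unused
surp=$(get_meminfo HugePages_Surp)    # @99: still being scanned for where this excerpt ends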
4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.734 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.735 07:10:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.735 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repetitive scan trace collapsed: each remaining /proc/meminfo key (Dirty, Writeback, AnonPages, Mapped, Shmem, ..., CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) is tested against HugePages_Surp and skipped via continue] 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.736
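The scan just traced is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" line at a time with IFS=': ' and read -r var val _; the target key is spelled with every character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) so that [[ == ]] compares it literally instead of as a glob. Below is a minimal standalone sketch of the same loop; the function name meminfo_value is ours, not the script's.

  #!/usr/bin/env bash
  # Sketch of a get_meminfo-style reader: print the value of one
  # /proc/meminfo field (hypothetical helper mirroring the traced loop).
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Quoting "$get" forces a literal match -- the same effect the
          # traced script gets by backslash-escaping every character.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done </proc/meminfo
      return 1
  }

  meminfo_value HugePages_Surp   # prints 0 on the box traced above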
07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39014440 kB' 'MemAvailable: 43108680 kB' 'Buffers: 4096 kB' 'Cached: 14786468 kB' 'SwapCached: 0 kB' 'Active: 11629080 kB' 'Inactive: 3699080 kB' 'Active(anon): 11150852 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540828 kB' 'Mapped: 212208 kB' 'Shmem: 10613256 kB' 'KReclaimable: 564424 kB' 'Slab: 1272664 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708240 kB' 'KernelStack: 22528 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12631764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.736 07:10:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31-32 -- # [repetitive scan trace collapsed: MemAvailable through HugePages_Free are each tested against HugePages_Rsvd and skipped via continue] 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:03.738 nr_hugepages=1024 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.738 resv_hugepages=0 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.738 surplus_hugepages=0 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.738 anon_hugepages=0 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:03.738
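With surp=0 and resv=0 established, the script asserts the even-2G allocation invariant: 1024 pages must equal nr_hugepages + surp + resv, and at 2048 kB per page that is 1024 * 2048 = 2097152 kB (2 GiB), matching the Hugetlb line in the dumps above. A hedged re-derivation of the same numbers, assuming the meminfo_value sketch from earlier; the values in comments are the ones this log shows.

  # Re-derive the figures echoed above (assumes the earlier meminfo_value sketch).
  nr=$(meminfo_value HugePages_Total)    # 1024
  surp=$(meminfo_value HugePages_Surp)   # 0
  resv=$(meminfo_value HugePages_Rsvd)   # 0
  size=$(meminfo_value Hugepagesize)     # 2048 (kB)
  (( nr == 1024 + surp + resv )) && echo "hugepage accounting consistent"
  echo "hugetlb total: $(( nr * size )) kB"   # 1024 * 2048 = 2097152 kB

07:10:35 setup.sh.hugepages.even_2G_alloc -- 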
setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.738 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.739 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39014440 kB' 'MemAvailable: 43108680 kB' 'Buffers: 4096 kB' 'Cached: 14786488 kB' 'SwapCached: 0 kB' 'Active: 11629096 kB' 'Inactive: 3699080 kB' 'Active(anon): 11150868 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540828 kB' 'Mapped: 212208 kB' 'Shmem: 10613276 kB' 'KReclaimable: 564424 kB' 'Slab: 1272664 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708240 kB' 'KernelStack: 22528 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12631784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:05:03.739 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.739 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.739 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.739 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.739 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.739 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.739 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.739 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.739 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.739 07:10:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.739 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repetitive scan trace collapsed: Buffers through Unaccepted are each tested against HugePages_Total and skipped via continue] 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.740 07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.740
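get_nodes has just enumerated /sys/devices/system/node/node+([0-9]) (an extglob pattern), found two NUMA nodes, and recorded the expected even split of 512 pages per node; the per-node figures are then read from /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that the script strips with mem=("${mem[@]#Node +([0-9]) }"). A minimal sketch of that per-node walk, assuming the same sysfs layout; the variable names are ours.

  # Sketch: list each NUMA node's HugePages_Total, mirroring the traced
  # get_nodes/get_meminfo pair (extglob is needed for the +([0-9]) pattern).
  shopt -s extglob
  for node in /sys/devices/system/node/node+([0-9]); do
      id=${node##*node}                      # ".../node0" -> "0"
      # Per-node meminfo lines read "Node 0 HugePages_Total: 512", so two
      # leading fields are skipped before the key/value pair.
      while IFS=': ' read -r _ _ var val _; do
          [[ $var == HugePages_Total ]] && echo "node$id: $val pages"
      done <"$node/meminfo"
  done

On the machine traced here this would report 512 pages on node0 and 512 on node1, consistent with the even_2G_alloc expectation.

07:10:35 setup.sh.hugepages.even_2G_alloc -- 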
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.740
07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.740
07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.740
07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.740
07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.741
07:10:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22911408 kB' 'MemUsed: 9680676 kB' 'SwapCached: 0 kB' 'Active: 6379904 kB' 'Inactive: 410828 kB' 'Active(anon): 6102592 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6637932 kB' 'Mapped: 77788 kB' 'AnonPages: 156028 kB' 'Shmem: 5949792 kB' 'KernelStack: 12568 kB' 'PageTables: 5092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379128 kB' 'Slab: 735764 kB' 'SReclaimable: 379128 kB' 'SUnreclaim: 356636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.741
[per-key scan elided: each meminfo field above is tested against HugePages_Surp in turn and skipped with continue]
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- #
mem_f=/sys/devices/system/node/node1/meminfo 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.742
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 16103032 kB' 'MemUsed: 11600076 kB' 'SwapCached: 0 kB' 'Active: 5249208 kB' 'Inactive: 3288252 kB' 'Active(anon): 5048292 kB' 'Inactive(anon): 0 kB' 'Active(file): 200916 kB' 'Inactive(file): 3288252 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8152676 kB' 'Mapped: 134420 kB' 'AnonPages: 384800 kB' 'Shmem: 4663508 kB' 'KernelStack: 9960 kB' 'PageTables: 3592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185296 kB' 'Slab: 536900 kB' 'SReclaimable: 185296 kB' 'SUnreclaim: 351604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.742
[per-key scan elided: each meminfo field above is tested against HugePages_Surp in turn and skipped with continue]
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.743
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.743
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.743
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.743
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.743
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.743
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.743
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:03.743
node0=512 expecting 512 00:05:03.743
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.743
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.743
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.743
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:03.743
node1=512 expecting 512 00:05:03.743
07:10:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 ==
\5\1\2 ]] 00:05:03.743 00:05:03.743 real 0m4.195s 00:05:03.743 user 0m1.422s 00:05:03.744 sys 0m2.818s 00:05:03.744 07:10:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.744 07:10:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.744 ************************************ 00:05:03.744 END TEST even_2G_alloc 00:05:03.744 ************************************ 00:05:03.744 07:10:36 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:03.744 07:10:36 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.744 07:10:36 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.744 07:10:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.744 ************************************ 00:05:03.744 START TEST odd_alloc 00:05:03.744 ************************************ 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:03.744 07:10:36 
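
The odd_alloc prologue above shows how the test sizes and splits the pool: HUGEMEM=2049 maps to size=2098176 kB (2049 * 1024), which at the default 2048 kB hugepage size rounds up to nr_hugepages=1025, and the (( _no_nodes > 0 )) loop then parcels that out as node1=512 and node0=513. Below is a minimal sketch of that even-split-with-remainder step, reconstructed from the trace; split_hugepages_per_node is a hypothetical name, and the remainder handling is my reading of the nodes_test assignments, not the script verbatim.

  split_hugepages_per_node() {
    # Split a hugepage count across NUMA nodes, walking nodes from the top
    # down as the trace does; lowest-numbered nodes absorb the remainder.
    local total=$1 nodes=$2
    local -a nodes_test
    local per=$(( total / nodes )) rem=$(( total % nodes )) n
    for (( n = nodes - 1; n >= 0; n-- )); do
      nodes_test[n]=$per
      (( n < rem )) && (( nodes_test[n]++ ))   # hand out the odd pages
    done
    for (( n = 0; n < nodes; n++ )); do
      printf 'node%d=%d\n' "$n" "${nodes_test[n]}"
    done
  }
  split_hugepages_per_node 1025 2   # prints node0=513 and node1=512, as traced

Walking the nodes from the top down leaves the leftover page on the lowest-numbered node, which matches the 513/512 split the trace records.
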
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.744 07:10:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:07.945 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:07.945 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.945 07:10:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.945 07:10:39 
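
After scripts/setup.sh confirms the PCI devices are already bound to vfio-pci, verify_nr_hugepages gates its AnonHugePages sample on the kernel's transparent hugepage policy: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above compares the sysfs policy string, whose active value sits in brackets, against [never]. A stand-alone sketch of the same gate follows; this is my reconstruction, and the awk extraction is an assumption rather than the script's own code.

  # Sample the THP baseline only when transparent hugepages are not disabled.
  thp=/sys/kernel/mm/transparent_hugepage/enabled
  if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # value in kB
  else
    anon=0
  fi
  echo "AnonHugePages baseline: ${anon} kB"

On this rig the policy string is "always [madvise] never", so the branch is taken and the trace goes on to read AnonHugePages (0 kB here) before counting the explicit 2 MB pool.
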
setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38987292 kB' 'MemAvailable: 43081532 kB' 'Buffers: 4096 kB' 'Cached: 14786608 kB' 'SwapCached: 0 kB' 'Active: 11630664 kB' 'Inactive: 3699080 kB' 'Active(anon): 11152436 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541772 kB' 'Mapped: 212312 kB' 'Shmem: 10613396 kB' 'KReclaimable: 564424 kB' 'Slab: 1272456 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708032 kB' 'KernelStack: 22560 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12632520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220740 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:05:07.945
[per-key scan elided: each meminfo field above is tested against AnonHugePages in turn and skipped with continue]
07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.946
07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.946
07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.946
07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:07.947
07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:07.947
07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.947
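
Every one of these lookups goes through the get_meminfo helper in setup/common.sh that the trace keeps re-entering: choose /proc/meminfo or the per-node file, mapfile it into an array, strip the "Node N " prefix with an extglob pattern, then scan "key: value" pairs with IFS=': ' until the requested key is found. A self-contained sketch of that pattern, assembled from the traced statements (the variable names and expansions are as traced; the while-loop plumbing around read is my assumption):

  shopt -s extglob   # required for the +([0-9]) pattern below
  get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    local -a mem
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
  }
  get_meminfo HugePages_Free 0   # per-node query; omit the node for /proc/meminfo

The per-node meminfo files expose the same HugePages_Total/Free/Surp counters as /proc/meminfo, which is why a single helper serves both the global and the per-node queries here.
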
07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38990048 kB' 'MemAvailable: 43084288 kB' 'Buffers: 4096 kB' 'Cached: 14786612 kB' 'SwapCached: 0 kB' 'Active: 11630216 kB' 'Inactive: 3699080 kB' 'Active(anon): 11151988 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541988 kB' 'Mapped: 212220 kB' 'Shmem: 10613400 kB' 'KReclaimable: 564424 kB' 'Slab: 1272452 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708028 kB' 'KernelStack: 22560 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12632536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.947 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 
07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.948 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38990392 kB' 'MemAvailable: 43084632 kB' 'Buffers: 4096 kB' 'Cached: 14786632 kB' 'SwapCached: 0 kB' 'Active: 11630252 kB' 'Inactive: 3699080 kB' 'Active(anon): 11152024 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541984 kB' 'Mapped: 212220 kB' 'Shmem: 10613420 kB' 'KReclaimable: 564424 kB' 'Slab: 1272452 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708028 kB' 
'KernelStack: 22560 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12632556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.949 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:07.950 nr_hugepages=1025 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.950 resv_hugepages=0 00:05:07.950 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # 
echo surplus_hugepages=0 00:05:07.951 surplus_hugepages=0 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.951 anon_hugepages=0 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38990576 kB' 'MemAvailable: 43084816 kB' 'Buffers: 4096 kB' 'Cached: 14786632 kB' 'SwapCached: 0 kB' 'Active: 11631024 kB' 'Inactive: 3699080 kB' 'Active(anon): 11152796 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542808 kB' 'Mapped: 212220 kB' 'Shmem: 10613420 kB' 'KReclaimable: 564424 kB' 'Slab: 1272452 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708028 kB' 'KernelStack: 22560 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12632576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc 
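[editor's note] At this point the odd_alloc test has collected anon=0, surp=0 and resv=0, echoed the four counters (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and asserts that the kernel honored the deliberately odd 1025-page request exactly: HugePages_Total must equal the requested count plus surplus plus reserved pages. A minimal sketch of that bookkeeping, assuming the get_meminfo sketch above; variable names follow the trace.

    nr_hugepages=1025                     # the deliberately odd request
    anon=$(get_meminfo AnonHugePages)     # 0: no THP inflating the numbers
    surp=$(get_meminfo HugePages_Surp)    # 0: nothing allocated over the limit
    resv=$(get_meminfo HugePages_Rsvd)    # 0: nothing reserved but unfaulted
    total=$(get_meminfo HugePages_Total)  # 1025 on this machine
    # the pool must account for exactly the requested pages
    ((total == nr_hugepages + surp + resv)) || exit 1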
-- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 
07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.951 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.952 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22890112 kB' 'MemUsed: 9701972 kB' 'SwapCached: 0 kB' 'Active: 6381116 kB' 'Inactive: 410828 kB' 'Active(anon): 6103804 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6638080 kB' 'Mapped: 77788 kB' 'AnonPages: 157096 kB' 'Shmem: 5949940 kB' 'KernelStack: 12632 kB' 'PageTables: 5224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379128 kB' 'Slab: 735336 kB' 'SReclaimable: 379128 kB' 'SUnreclaim: 356208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.953 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 16110420 kB' 'MemUsed: 11592688 kB' 'SwapCached: 0 kB' 'Active: 5249880 kB' 'Inactive: 3288252 kB' 'Active(anon): 5048964 kB' 'Inactive(anon): 0 kB' 'Active(file): 200916 kB' 'Inactive(file): 3288252 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8152688 kB' 'Mapped: 134432 kB' 'AnonPages: 385572 kB' 'Shmem: 4663520 kB' 'KernelStack: 9976 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185296 kB' 'Slab: 537092 kB' 'SReclaimable: 185296 kB' 'SUnreclaim: 351796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.954 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
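[Editor's note] This pass is get_meminfo HugePages_Surp for node 1. As the node-0 pass showed at common.sh@22-29, the helper swaps /proc/meminfo for the node-local file when it exists and strips the "Node <N> " prefix so the same key scan applies to both sources. A short sketch of that source selection, under the paths shown in the trace; the trailing printf is illustrative only:

  # Prefer the node-local meminfo and strip its "Node <N> " prefix so the
  # entries look like /proc/meminfo before the "Key: value" scan runs.
  shopt -s extglob                            # for the +([0-9]) strip pattern
  node=1
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]:0:3}"               # first few entries, for illustration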
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:05:07.955 node0=512 expecting 513
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:05:07.955 node1=513 expecting 512
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:05:07.955
00:05:07.955 real 0m4.063s
00:05:07.955 user 0m1.491s
00:05:07.955 sys 0m2.613s
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:07.955 07:10:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:07.955 ************************************
00:05:07.955 END TEST odd_alloc
00:05:07.955 ************************************
00:05:07.955 07:10:40 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:07.955 07:10:40 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:07.955 07:10:40 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:07.955 07:10:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:07.955 ************************************
00:05:07.955 START TEST custom_alloc
00:05:07.955 ************************************
00:05:07.955 07:10:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:05:07.955 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
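[Editor's note] The odd_alloc pass that just ended hinges on the kernel being free to place the odd 1025th page on either node, so hugepages.sh@126-130 compares the requested and measured counts as sorted sets rather than per node; that is why "node0=512 expecting 513" still passes. A reduced sketch of that comparison, with this run's values filled in by hand:

  # Using each count as the index of a plain indexed array sorts the counts:
  # "${!sorted_t[*]}" expands the indices in ascending order, here "512 513".
  sorted_t=() ; sorted_s=()
  nodes_test=(512 513)   # measured per node in this run (512+513 = 1025)
  nodes_sys=(513 512)    # the echoed expectations, opposite order
  for node in "${!nodes_test[@]}"; do
      sorted_t[${nodes_test[node]}]=1
      sorted_s[${nodes_sys[node]}]=1
  done
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "odd split verified"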
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:07.956 07:10:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
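[Editor's note] The HUGENODE string just built comes from the two get_test_nr_hugepages calls traced above: with the 2048 kB Hugepagesize this run reports further down, 1048576 kB maps to nodes_hp[0]=512 pages and 2097152 kB to nodes_hp[1]=1024. The division itself is implied by the traced values rather than shown explicitly; a sketch under that assumption, before setup.sh's output resumes below:

  # Convert a request in kB into a hugepage count. The 2048 kB page size is
  # the Hugepagesize this run reports; the division is inferred, not traced.
  default_hugepages=2048   # kB per page
  for size in 1048576 2097152; do
      echo "${size} kB -> $(( size / default_hugepages )) hugepages"
  done
  # -> 512 and 1024, matching HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'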
00:05:12.158 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:12.158 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.158 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37968548 kB' 'MemAvailable: 42062788 kB' 'Buffers: 4096 kB' 'Cached: 
14786780 kB' 'SwapCached: 0 kB' 'Active: 11630372 kB' 'Inactive: 3699080 kB' 'Active(anon): 11152144 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541900 kB' 'Mapped: 212328 kB' 'Shmem: 10613568 kB' 'KReclaimable: 564424 kB' 'Slab: 1272264 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707840 kB' 'KernelStack: 22592 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12636188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220660 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
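[Editor's note] Before the AnonHugePages scan running here, verify_nr_hugepages gated on the transparent-hugepage mode at hugepages.sh@96, matching the mode string against *\[\n\e\v\e\r\]*. A sketch of that gate; the sysfs path is an assumption, since the trace only shows the expanded string "always [madvise] never":

  # "always [madvise] never": the bracketed word is the active THP mode.
  # Anonymous hugepages are only folded into the accounting when the mode
  # is not [never]; the path below is assumed, not shown in the trace.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp != *"[never]"* ]]; then
      echo "include AnonHugePages in the accounting"
  fi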
00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.159 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 
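(For readability: the trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one key at a time. A minimal sketch of that helper, reconstructed from the xtrace -- not the verbatim SPDK source, which may differ in details:)

#!/usr/bin/env bash
shopt -s extglob  # required for the +([0-9]) pattern used below

# get_meminfo KEY [NODE] -- readable sketch of setup/common.sh@16-@33.
get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# With a NODE argument, read that NUMA node's view instead; per-node
	# files live under /sys/devices/system/node/node<N>/meminfo.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	# Per-node files prefix every line with "Node <N> "; strip it so the
	# key match below works for both sources (common.sh@29).
	mem=("${mem[@]#Node +([0-9]) }")

	# Every mismatching key shows up in the trace as a bare `continue`;
	# the quoted RHS forces a literal match, which is why xtrace prints
	# the target as the backslash-escaped \A\n\o\n... pattern.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val" && return 0   # units column ("kB") is discarded into _
	done < <(printf '%s\n' "${mem[@]}")

	return 1
}

(Here `get_meminfo AnonHugePages` prints 0, which hugepages.sh stores as anon=0 below.)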
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37969732 kB' 'MemAvailable: 42063972 kB' 'Buffers: 4096 kB' 'Cached: 14786784 kB' 'SwapCached: 0 kB' 'Active: 11630400 kB' 'Inactive: 3699080 kB' 'Active(anon): 11152172 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541904 kB' 'Mapped: 212248 kB' 'Shmem: 10613572 kB' 'KReclaimable: 564424 kB' 'Slab: 1272216 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707792 kB' 'KernelStack: 22512 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12636204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220660 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:05:12.160 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace condensed: the read loop compares every key above against HugePages_Surp, with `continue` on each mismatch, until the final key matches]
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
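(For a quick cross-check of these values outside the harness -- illustrative awk one-liners, not part of the SPDK scripts:)

# Same lookups done with awk instead of the read loop:
awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo          # -> 0
awk '$1 == "AnonHugePages:"  {print $2}' /proc/meminfo          # -> 0
# Per-node files carry a leading "Node <N>" on every line, which shifts
# the fields by two -- the prefix that common.sh@29 strips. node0 assumed:
awk '$3 == "HugePages_Surp:" {print $4}' /sys/devices/system/node/node0/meminfo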
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37972936 kB' 'MemAvailable: 42067176 kB' 'Buffers: 4096 kB' 'Cached: 14786788 kB' 'SwapCached: 0 kB' 'Active: 11630344 kB' 'Inactive: 3699080 kB' 'Active(anon): 11152116 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541852 kB' 'Mapped: 212248 kB' 'Shmem: 10613576 kB' 'KReclaimable: 564424 kB' 'Slab: 1272216 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707792 kB' 'KernelStack: 22592 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12634608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220596 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:05:12.162 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace condensed: the same key-by-key scan, now against HugePages_Rsvd -- `continue` on every mismatch until the match below]
var val _ 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:12.164 nr_hugepages=1536 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.164 resv_hugepages=0 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.164 surplus_hugepages=0 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.164 anon_hugepages=0 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37973380 kB' 'MemAvailable: 42067620 kB' 'Buffers: 4096 kB' 'Cached: 14786788 kB' 'SwapCached: 0 kB' 'Active: 11630656 kB' 'Inactive: 3699080 kB' 'Active(anon): 
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.164 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37973380 kB' 'MemAvailable: 42067620 kB' 'Buffers: 4096 kB' 'Cached: 14786788 kB' 'SwapCached: 0 kB' 'Active: 11630656 kB' 'Inactive: 3699080 kB' 'Active(anon): 11152428 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542164 kB' 'Mapped: 212248 kB' 'Shmem: 10613576 kB' 'KReclaimable: 564424 kB' 'Slab: 1272216 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 707792 kB' 'KernelStack: 22640 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12636248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
[... setup/common.sh@31-32 repeat for every key from MemTotal through Unaccepted without a match ...]
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
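Stripped of the scanning, the block above is a three-value accounting check: the kernel's HugePages_Total must equal the pages the test requested plus surplus and reserved pages. A hedged condensation of that step (verify_hugepage_accounting is a hypothetical name; the trace's own function is verify_nr_hugepages in setup/hugepages.sh, and this sketch assumes the get_meminfo sketch above is in scope):

    verify_hugepage_accounting() {
        local nr_hugepages=$1 surp resv total        # 1536 in this run
        surp=$(get_meminfo HugePages_Surp)           # overcommit pages beyond the static pool
        resv=$(get_meminfo HugePages_Rsvd)           # promised to mappings, not yet faulted in
        total=$(get_meminfo HugePages_Total)
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        (( total == nr_hugepages + surp + resv ))    # the @107/@110 assertion in the trace
    }

With surplus and reserved both 0 here, the assertion reduces to 1536 == 1536, so the trace falls through to the per-node half of the check.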
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.166 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22896456 kB' 'MemUsed: 9695628 kB' 'SwapCached: 0 kB' 'Active: 6377756 kB' 'Inactive: 410828 kB' 'Active(anon): 6100444 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6638184 kB' 'Mapped: 77808 kB' 'AnonPages: 153572 kB' 'Shmem: 5950044 kB' 'KernelStack: 12520 kB' 'PageTables: 4844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379128 kB' 'Slab: 734888 kB' 'SReclaimable: 379128 kB' 'SUnreclaim: 355760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 repeat for every node0 meminfo key from MemTotal through HugePages_Free without a match ...]
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.167 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.168 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 15076756 kB' 'MemUsed: 12626352 kB' 'SwapCached: 0 kB' 'Active: 5253548 kB' 'Inactive: 3288252 kB' 'Active(anon): 5052632 kB' 'Inactive(anon): 0 kB' 'Active(file): 200916 kB' 'Inactive(file): 3288252 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8152740 kB' 'Mapped: 134440 kB' 'AnonPages: 389200 kB' 'Shmem: 4663572 kB' 'KernelStack: 10152 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185296 kB' 'Slab: 537328 kB' 'SReclaimable: 185296 kB' 'SUnreclaim: 352032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 repeat for every node1 meminfo key from MemTotal through HugePages_Free without a match ...]
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:12.169 node0=512 expecting 512
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:05:12.169 node1=1024 expecting 1024
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:12.169
00:05:12.169 real 0m3.973s
00:05:12.169 user 0m1.466s
00:05:12.169 sys 0m2.563s
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:12.169 07:10:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:12.169 ************************************
00:05:12.169 END TEST custom_alloc
00:05:12.169 ************************************
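custom_alloc exits only after the per-node split checks out: 512 pages on node 0 plus 1024 on node 1, each read from that node's sysfs meminfo. A minimal sketch of that comparison (check_node_split is a hypothetical wrapper around the get_meminfo sketch above; the trace itself drives it through verify_nr_hugepages with the nodes_test/nodes_sys arrays):

    check_node_split() {                 # usage: check_node_split 512 1024
        local node=0 expected actual
        for expected in "$@"; do
            # per-node pool size from /sys/devices/system/node/node$node/meminfo
            actual=$(get_meminfo HugePages_Total "$node")
            echo "node$node=$actual expecting $expected"
            (( actual == expected )) || return 1
            node=$((node + 1))
        done
    }

Invoked as check_node_split 512 1024, this would print the same node0=512 expecting 512 and node1=1024 expecting 1024 lines the test echoes above.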
00:05:12.169 07:10:44 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:12.169 07:10:44 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:12.169 07:10:44 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:12.169 07:10:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:12.169 ************************************
00:05:12.169 START TEST no_shrink_alloc
00:05:12.169 ************************************
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:12.169 07:10:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:15.508 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:15.508 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
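
Everything from here to the end of verify_nr_hugepages is three passes of the same get_meminfo helper, one each for AnonHugePages, HugePages_Surp and HugePages_Rsvd (the AnonHugePages pass runs because the transparent_hugepage state read above, "always [madvise] never", is not "[never]"). A rough reconstruction of the helper from the xtrace, offered as a hedged sketch rather than the exact setup/common.sh source:

    shopt -s extglob    # needed for the +([0-9]) prefix-strip pattern
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # with a node argument, read the per-node stats instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop any "Node N " prefix
        while IFS=': ' read -r var val _; do
            # scan field by field; non-matching keys just continue
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Total    # -> 1024 on this runner
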
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39013152 kB' 'MemAvailable: 43107392 kB' 'Buffers: 4096 kB' 'Cached: 14786952 kB' 'SwapCached: 0 kB' 'Active: 11632260 kB' 'Inactive: 3699080 kB' 'Active(anon): 11154032 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542948 kB' 'Mapped: 213452 kB' 'Shmem: 10613740 kB' 'KReclaimable: 564424 kB' 'Slab: 1272836 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708412 kB' 'KernelStack: 22608 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12668564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220900 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:15.508 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... IFS=': ' / read / compare / continue xtrace repeated for each /proc/meminfo field until the target matches ...]
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
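
The node= in the pass above is empty, so the existence test [[ -e /sys/devices/system/node/node/meminfo ]] fails and the helper falls back to the global /proc/meminfo. When a node is given, the per-node file's "Node N " prefix is what the extglob strip at common.sh@29 removes; a small illustration with a made-up sample line:

    shopt -s extglob
    line='Node 0 HugePages_Total:     512'
    echo "${line#Node +([0-9]) }"    # -> "HugePages_Total:     512"
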
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39013968 kB' 'MemAvailable: 43108208 kB' 'Buffers: 4096 kB' 'Cached: 14786956 kB' 'SwapCached: 0 kB' 'Active: 11632468 kB' 'Inactive: 3699080 kB' 'Active(anon): 11154240 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543656 kB' 'Mapped: 213864 kB' 'Shmem: 10613744 kB' 'KReclaimable: 564424 kB' 'Slab: 1272832 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708408 kB' 'KernelStack: 22624 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12669940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220852 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:15.510 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... IFS=': ' / read / compare / continue xtrace repeated for each /proc/meminfo field, including HugePages_Rsvd, until the target matches ...]
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
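
Both snapshots so far agree on the hugepage figures: HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB, Hugetlb 2097152 kB. That is internally consistent with the get_test_nr_hugepages 2097152 0 request earlier: 2097152 kB at 2048 kB per page is exactly the 1024 pages the test expects on node 0. A quick arithmetic cross-check (the values are copied from the snapshots; the comparison itself is a hedged sketch, not the test's own code):

    # figures from the /proc/meminfo snapshots above
    size_kb=2097152 hugepagesize_kb=2048
    hugepages_total=1024 hugetlb_kb=2097152
    (( size_kb / hugepagesize_kb == hugepages_total )) && echo '1024 pages requested'
    (( hugepages_total * hugepagesize_kb == hugetlb_kb )) && echo 'Hugetlb fully accounted'
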
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39012624 kB' 'MemAvailable: 43106864 kB' 'Buffers: 4096 kB' 'Cached: 14786972 kB' 'SwapCached: 0 kB' 'Active: 11636984 kB' 'Inactive: 3699080 kB' 'Active(anon): 11158756 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548740 kB' 'Mapped: 213864 kB' 'Shmem: 10613760 kB' 'KReclaimable: 564424 kB' 'Slab: 1272832 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708408 kB' 'KernelStack: 22624 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12674724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220840 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:15.512 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... IFS=': ' / read / compare / continue xtrace repeated for the /proc/meminfo fields through HardwareCorrupted ...]
00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages ==
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.514 07:10:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:15.514 nr_hugepages=1024 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.514 resv_hugepages=0 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.514 surplus_hugepages=0 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.514 anon_hugepages=0 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.515 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39015040 kB' 'MemAvailable: 43109280 kB' 'Buffers: 4096 kB' 'Cached: 14786996 kB' 'SwapCached: 0 kB' 'Active: 11632536 kB' 'Inactive: 3699080 kB' 'Active(anon): 11154308 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
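The expansion that follows is setup/common.sh's get_meminfo helper resolving HugePages_Total the same way it just resolved HugePages_Rsvd: slurp the stats file, then scan it key by key. A minimal standalone sketch of that pattern, reconstructed from the xtrace alone (the real helper mapfiles the file into the mem array; streaming it through sed here is a simplification, not the script's exact code):

    # get_meminfo KEY [NODE] -- print one numeric field from /proc/meminfo,
    # or from a node's sysfs copy when NODE is given (sketch, see note above).
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix each line with "Node <n> "; strip that, then
        # IFS=': ' splits "Key: value kB" into var=Key val=value _=kB.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # not the requested key, keep going
            echo "$val"                        # bare number, e.g. 0 or 1024
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    get_meminfo HugePages_Total     # 1024 in this run
    get_meminfo HugePages_Surp 0    # node0's surplus, read via sysfs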
00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:15.514 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:15.515 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39015040 kB' 'MemAvailable: 43109280 kB' 'Buffers: 4096 kB' 'Cached: 14786996 kB' 'SwapCached: 0 kB' 'Active: 11632536 kB' 'Inactive: 3699080 kB' 'Active(anon): 11154308 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543748 kB' 'Mapped: 214144 kB' 'Shmem: 10613784 kB' 'KReclaimable: 564424 kB' 'Slab: 1272832 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708408 kB' 'KernelStack: 22624 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12669984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220836 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:05:15.515 07:10:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [per-key scan: [[ $var == HugePages_Total ]] || continue -- repeated for each key from MemTotal through Unaccepted]
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
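get_nodes (hugepages.sh@27-33 above) discovers the NUMA nodes and records a per-node hugepage count, after which @115-117 adds each node's reserved and surplus pages into nodes_test. A minimal sketch of that discovery loop -- the array names follow the trace, but the sysfs nr_hugepages path used to fill them is an assumption, since the xtrace only shows the assigned results (1024 and 0):

    shopt -s extglob nullglob              # +([0-9]) globbing, as in the trace
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source of the per-node count; the log only shows the values.
        nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}              # 2 on this machine
    echo "node0=${nodes_sys[0]} node1=${nodes_sys[1]} no_nodes=$no_nodes"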
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:15.516 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:15.517 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21863072 kB' 'MemUsed: 10729012 kB' 'SwapCached: 0 kB' 'Active: 6379260 kB' 'Inactive: 410828 kB' 'Active(anon): 6101948 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6638328 kB' 'Mapped: 78848 kB' 'AnonPages: 154880 kB' 'Shmem: 5950188 kB' 'KernelStack: 12600 kB' 'PageTables: 4964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379128 kB' 'Slab: 735688 kB' 'SReclaimable: 379128 kB' 'SUnreclaim: 356560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:15.517 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [per-key scan: [[ $var == HugePages_Surp ]] || continue -- repeated for each node0 key from MemTotal through HugePages_Free]
00:05:15.778 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:15.778 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:15.778 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:15.778 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:15.778 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:15.778 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:15.779 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:15.779 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:15.779 node0=1024 expecting 1024
00:05:15.779 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:15.779 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:15.779 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:15.779 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:15.779 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:15.779 07:10:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
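hugepages.sh@202 re-enters setup.sh with CLEAR_HUGE=no and NRHUGE=512; both are environment knobs that SPDK's setup.sh reads (NRHUGE asks for a page count, CLEAR_HUGE decides whether the existing reservation is torn down first). The equivalent manual invocation, using this job's workspace path:

    # Request 512 hugepages while keeping the 1024 already reserved.
    # With CLEAR_HUGE=no the existing pages stay, so the smaller request
    # is effectively a no-op and setup.sh only logs it (INFO line below).
    CLEAR_HUGE=no NRHUGE=512 \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh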
00:05:19.980 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:19.980 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:19.981 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:19.981 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:19.981 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
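The @96 guard above compares /sys/kernel/mm/transparent_hugepage/enabled -- "always [madvise] never" on this host, i.e. THP in madvise mode -- against *[never]*: AnonHugePages is only worth sampling when transparent hugepages are not fully disabled. The same check, sketched with the get_meminfo sketch from earlier:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP may be handing out anonymous hugepages; count them (kB)
        anon=$(get_meminfo AnonHugePages)                # 0 in this run
    else
        anon=0
    fi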
'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.981 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.982 07:10:51 
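The get_meminfo calls traced above all follow the same shape: pick a stats file, strip any per-node prefix, then scan key/value pairs until the requested key matches. A minimal bash sketch of that helper, reconstructed from the xtrace rather than copied from the shipped spdk setup/common.sh (the loop structure and the extglob requirement are inferred):

    shopt -s extglob                      # assumed: the +([0-9]) pattern below needs extglob
    get_meminfo() {                       # usage: get_meminfo <key> [numa-node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem line
        # With a node argument, read the per-node stats instead of the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # e.g. "MemTotal: 60295192 kB"
            [[ $var == "$get" ]] || continue          # quoted RHS forces a literal match
            echo "$val"                               # 0 for AnonHugePages in the run above
            return 0
        done
    }

Called as get_meminfo AnonHugePages on this box it prints 0, which is exactly the value the trace assigns to anon.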
00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39020756 kB' 'MemAvailable: 43114996 kB' 'Buffers: 4096 kB' 'Cached: 14787108 kB' 'SwapCached: 0 kB' 'Active: 11632056 kB' 'Inactive: 3699080 kB' 'Active(anon): 11153828 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543768 kB' 'Mapped: 213368 kB' 'Shmem: 10613896 kB' 'KReclaimable: 564424 kB' 'Slab: 1272916 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708492 kB' 'KernelStack: 22624 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12672200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220772 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.982 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / compare / continue xtrace cycles for the remaining /proc/meminfo keys trimmed, down to the HugePages_Surp match ...]
00:05:19.984 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.984 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:19.984 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:19.984 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
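A note on the backslash-riddled patterns in the comparisons above (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and friends): they are not in the script source. bash's xtrace escapes every character of a quoted [[ == ]] right-hand side to show that the expansion is matched as a literal string rather than a glob. A quick reproduction in an interactive shell (hypothetical session, not taken from this log):

    $ get=HugePages_Surp var=MemTotal
    $ set -x
    $ [[ $var == "$get" ]]
    + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]

With an unquoted right-hand side the trace would print the pattern unescaped, and HugePages_* would then be treated as a glob.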
00:05:19.984 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:19.984 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39020076 kB' 'MemAvailable: 43114316 kB' 'Buffers: 4096 kB' 'Cached: 14787108 kB' 'SwapCached: 0 kB' 'Active: 11632588 kB' 'Inactive: 3699080 kB' 'Active(anon): 11154360 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543784 kB' 'Mapped: 213368 kB' 'Shmem: 10613896 kB' 'KReclaimable: 564424 kB' 'Slab: 1272916 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708492 kB' 'KernelStack: 22704 kB' 'PageTables: 9348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12672384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220868 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:05:19.985 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:19.985 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / compare / continue xtrace cycles for the remaining /proc/meminfo keys trimmed, down to the HugePages_Rsvd match ...]
00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:19.986 nr_hugepages=1024
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local
node= 00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.986 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 39019824 kB' 'MemAvailable: 43114064 kB' 'Buffers: 4096 kB' 'Cached: 14787108 kB' 'SwapCached: 0 kB' 'Active: 11632708 kB' 'Inactive: 3699080 kB' 'Active(anon): 11154480 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543936 kB' 'Mapped: 213368 kB' 'Shmem: 10613896 kB' 'KReclaimable: 564424 kB' 'Slab: 1272916 kB' 'SReclaimable: 564424 kB' 'SUnreclaim: 708492 kB' 'KernelStack: 22768 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12670792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220868 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.987 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.988 07:10:51 
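
The scans collapsed above are all the same helper: setup/common.sh's get_meminfo slurps the chosen meminfo file and walks it with `IFS=': ' read -r var val _`, echoing the value of the requested field. A condensed sketch of that idiom, reconstructed from the traced commands (the function body follows the trace; the standalone wrapper around it is an assumption):

    #!/usr/bin/env bash
    shopt -s extglob # the +([0-9]) pattern below needs extended globbing

    # Echo the value of one meminfo field, system-wide or for one NUMA node.
    get_meminfo() {
        local get=$1 node=$2
        local var val line
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, prefer the per-node view of the same counters.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    # e.g. get_meminfo HugePages_Total  -> 1024 (system-wide, as traced above)
    #      get_meminfo HugePages_Surp 0 -> 0    (node 0 only)

The assertions the trace then makes, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), are the point of the test: after the allocation churn, the kernel must still report exactly the configured pool, with no surplus or reserved pages left over.
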
00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.988 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:19.989 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21880960 kB' 'MemUsed: 10711124 kB' 'SwapCached: 0 kB' 'Active: 6380344 kB' 'Inactive: 410828 kB' 'Active(anon): 6103032 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6638468 kB' 'Mapped: 78848 kB' 'AnonPages: 156004 kB' 'Shmem: 5950328 kB' 'KernelStack: 12600 kB' 'PageTables: 4980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379128 kB' 'Slab: 735800 kB' 'SReclaimable: 379128 kB' 'SUnreclaim: 356672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:19.989 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [node0 meminfo scan for HugePages_Surp: fields MemTotal through HugePages_Free each compared and skipped via continue]
00:05:19.990 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.990 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:19.990 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:19.990 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:19.990 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:19.990 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:19.990 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:19.990 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:19.990 node0=1024 expecting 1024
00:05:19.990 07:10:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:19.990 
00:05:19.990 real 0m7.662s
00:05:19.990 user 0m2.683s
00:05:19.990 sys 0m4.949s
00:05:19.990 07:10:51 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:19.990 07:10:51 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:19.990 ************************************
00:05:19.990 END TEST no_shrink_alloc
00:05:19.990 ************************************
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:19.990 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
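
The clear_hp trace above issues echo 0 four times: once per hugepage size directory on each of the two NUMA nodes. xtrace does not print redirection targets, so the sysfs path in this sketch is an assumption (nr_hugepages is the standard per-node knob the traced glob points at):

    # Zero out every per-node hugepage pool, mirroring the traced clear_hp.
    clear_hp() {
        local node hp
        for node in "${!nodes_sys[@]}"; do # node indices filled in by get_nodes
            for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
                echo 0 >"$hp/nr_hugepages" # target assumed; the trace shows only "echo 0"
            done
        done
        export CLEAR_HUGE=yes # exported for the setup.sh runs that follow
    }
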
00:05:19.990 
00:05:19.990 real 0m30.488s
00:05:19.990 user 0m10.071s
00:05:19.990 sys 0m18.640s
00:05:19.990 07:10:52 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:19.990 07:10:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:19.990 ************************************
00:05:19.990 END TEST hugepages
00:05:19.990 ************************************
00:05:19.990 07:10:52 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh
00:05:19.990 07:10:52 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:19.990 07:10:52 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:19.990 07:10:52 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:19.990 ************************************
00:05:19.990 START TEST driver
00:05:19.990 ************************************
00:05:19.990 07:10:52 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh
00:05:19.990 * Looking for test storage...
00:05:19.990 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup
00:05:19.990 07:10:52 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:05:19.990 07:10:52 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:19.990 07:10:52 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:05:25.268 07:10:57 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:05:25.268 07:10:57 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:25.268 07:10:57 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:25.268 07:10:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:05:25.268 ************************************
00:05:25.268 START TEST guess_driver
00:05:25.268 ************************************
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 ))
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:05:25.268 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:05:25.268 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:05:25.268 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:05:25.268 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:05:25.268 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:05:25.268 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:05:25.268 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
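
The trace just above is the entire driver decision: guess_driver settles on vfio-pci because the host exposes populated IOMMU groups (256 of them) and modprobe can resolve vfio_pci to real kernel objects. A sketch of that logic as read from the trace (the exact fallback order in SPDK's driver.sh, and any non-vfio fallback, are not shown here):

    # Decide whether vfio-pci is usable on this host; reconstructed from the trace.
    pick_vfio() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio-pci needs a working IOMMU (populated groups) or unsafe no-IOMMU mode.
        if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == [Yy] ]]; then
            # is_driver check: modprobe must resolve the module to actual .ko files.
            if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }
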
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:05:25.268 Looking for driver=vfio-pci
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:05:25.268 07:10:57 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config
00:05:29.506 07:11:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # [config scan, 07:11:01 through 07:11:03: for each device line in the setup.sh config output the '->' marker matched at driver.sh@58, the bound driver verified as vfio-pci at driver.sh@61, and the next line read at driver.sh@57]
00:05:31.146 07:11:03 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
07:11:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:36.418 ************************************ 00:05:36.418 END TEST driver 00:05:36.418 ************************************ 00:05:36.418 07:11:08 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:36.418 07:11:08 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.418 07:11:08 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.418 07:11:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:36.418 ************************************ 00:05:36.418 START TEST devices 00:05:36.418 ************************************ 00:05:36.418 07:11:08 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:36.418 * Looking for test storage... 00:05:36.418 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:36.418 07:11:08 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:36.418 07:11:08 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:36.418 07:11:08 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:36.418 07:11:08 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:41.706 07:11:13 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:41.706 07:11:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:41.707 07:11:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:41.707 07:11:13 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:41.707 07:11:13 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:41.707 07:11:13 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:41.707 07:11:13 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:41.707 07:11:13 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:41.707 07:11:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:41.707 07:11:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:41.707 07:11:13 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:41.707 No valid GPT data, bailing 00:05:41.707 
07:11:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:41.707 07:11:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:41.707 07:11:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:41.707 07:11:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:41.707 07:11:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:41.707 07:11:13 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:41.707 07:11:13 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:41.707 07:11:13 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.707 07:11:13 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.707 07:11:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:41.707 ************************************ 00:05:41.707 START TEST nvme_mount 00:05:41.707 ************************************ 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- 
# sgdisk /dev/nvme0n1 --zap-all 00:05:41.707 07:11:13 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:41.993 Creating new GPT entries in memory. 00:05:41.993 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:41.993 other utilities. 00:05:41.993 07:11:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:41.993 07:11:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.993 07:11:14 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:41.993 07:11:14 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:41.993 07:11:14 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:42.931 Creating new GPT entries in memory. 00:05:42.931 The operation has completed successfully. 00:05:42.931 07:11:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:42.931 07:11:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:42.931 07:11:15 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2484713 00:05:42.932 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.932 07:11:15 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:42.932 07:11:15 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.932 07:11:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:42.932 07:11:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:42.932 07:11:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.932 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:42.932 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:42.932 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:43.191 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:43.191 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:43.191 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:43.191 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:43.191 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:43.191 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:43.191 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:43.191 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:43.191 07:11:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:43.191 07:11:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.191 07:11:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:47.386 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:47.386 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:47.386 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:47.387 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:47.387 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:47.387 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.387 07:11:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:50.678 07:11:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
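The found=1 above is produced by a verify loop that restricts setup.sh to the device under test and scans its per-device report. A minimal sketch of the pattern (the function name here is illustrative; the column layout is inferred from the read -r pci _ _ status in the trace):

    verify_active() {
        local dev=$1 want=$2 found=0 pci status
        # Each report line is "<bdf> <vendor> <device> <status...>"; a busy
        # device explains itself with "Active devices: <users>, so not binding".
        while read -r pci _ _ status; do
            [[ $pci == "$dev" && $status == *"Active devices: "*"$want"* ]] && found=1
        done < <(PCI_ALLOWED="$dev" ./scripts/setup.sh config)
        (( found == 1 ))
    }

    # e.g. verify_active 0000:d8:00.0 nvme0n1:nvme0n1p1   (values from this run)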
00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.678 07:11:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:54.875 07:11:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.875 07:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.875 07:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:54.875 07:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:54.875 07:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:54.875 07:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:54.875 07:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.875 07:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:54.875 07:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:54.875 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:54.875 00:05:54.875 real 0m13.724s 00:05:54.875 user 0m3.950s 00:05:54.875 sys 0m7.634s 00:05:54.875 07:11:27 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.875 07:11:27 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:54.875 ************************************ 00:05:54.875 END TEST nvme_mount 00:05:54.875 ************************************ 00:05:54.875 07:11:27 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:54.875 07:11:27 setup.sh.devices -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.875 07:11:27 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.875 07:11:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:54.875 ************************************ 00:05:54.875 START TEST dm_mount 00:05:54.875 ************************************ 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:54.876 07:11:27 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:55.816 Creating new GPT entries in memory. 00:05:55.816 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:55.816 other utilities. 00:05:55.816 07:11:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:55.816 07:11:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:55.816 07:11:28 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:55.816 07:11:28 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:55.816 07:11:28 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:56.754 Creating new GPT entries in memory. 00:05:56.754 The operation has completed successfully. 
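Note how the sgdisk bounds above fall out of the arithmetic traced earlier: (( size /= 512 )) turns 1073741824 bytes into 2097152 sectors, the first partition starts at sector 2048, so its end is 2048 + 2097152 - 1 = 2099199, and the second (created next) spans 2099200..4196351. A condensed sketch of that loop, paraphrased from the trace:

    disk=/dev/nvme0n1
    size=$(( 1073741824 / 512 ))   # 1 GiB in 512-byte sectors = 2097152
    sgdisk "$disk" --zap-all       # clear any stale GPT/MBR state first
    start=2048
    for part in 1 2; do
        end=$(( start + size - 1 ))
        # flock serializes writers of the same partition table
        flock "$disk" sgdisk "$disk" --new=$part:$start:$end
        start=$(( end + 1 ))
    done
    # scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 runs
    # alongside and waits for the udev add events before mkfs touches the nodes.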
00:05:56.754 07:11:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:56.754 07:11:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:56.754 07:11:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:56.754 07:11:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:56.754 07:11:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:57.692 The operation has completed successfully. 00:05:57.692 07:11:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:57.692 07:11:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:57.692 07:11:30 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2489815 00:05:57.692 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:57.692 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:57.692 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:57.692 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:57.951 07:11:30 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.951 07:11:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:01.242 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.501 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:01.501 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:01.501 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@53 
-- # local found=0 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:01.502 07:11:33 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.693 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.694 07:11:37 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:05.694 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:05.694 00:06:05.694 real 0m10.830s 00:06:05.694 user 0m2.652s 00:06:05.694 sys 0m5.231s 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.694 07:11:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:05.694 ************************************ 00:06:05.694 END TEST dm_mount 00:06:05.694 ************************************ 00:06:05.694 07:11:37 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:05.694 07:11:37 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:05.694 07:11:37 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:05.694 07:11:37 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:05.694 07:11:37 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:05.694 07:11:38 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b 
/dev/nvme0n1 ]]
00:06:05.956 07:11:38 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:06:05.956 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:06:05.956 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
00:06:05.956 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:06:05.956 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:06:05.956 07:11:38 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:06:05.956 07:11:38 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount
00:06:05.956 07:11:38 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:06:05.956 07:11:38 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:06:05.956 07:11:38 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:06:05.956 07:11:38 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:06:05.956 07:11:38 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:06:05.956
00:06:05.956 real 0m29.442s
00:06:05.956 user 0m8.237s
00:06:05.956 sys 0m16.015s
00:06:05.956 07:11:38 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:05.956 07:11:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:06:05.956 ************************************
00:06:05.956 END TEST devices
00:06:05.956 ************************************
00:06:05.956
00:06:05.956 real 1m45.219s
00:06:05.956 user 0m31.754s
00:06:05.956 sys 1m0.964s
00:06:05.956 07:11:38 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:05.956 07:11:38 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:06:05.956 ************************************
00:06:05.956 END TEST setup.sh
00:06:05.956 ************************************
00:06:05.956 07:11:38 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:06:09.326 Hugepages
00:06:09.326 node hugesize free / total
00:06:09.326 node0 1048576kB 0 / 0
00:06:09.326 node0 2048kB 2048 / 2048
00:06:09.326 node1 1048576kB 0 / 0
00:06:09.326 node1 2048kB 0 / 0
00:06:09.326
00:06:09.326 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:09.326 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:06:09.326 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:06:09.326 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:06:09.326 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:06:09.326 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:06:09.326 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:06:09.326 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:06:09.326 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:06:09.326 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:06:09.326 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:06:09.326 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:06:09.326 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:06:09.585 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:06:09.585 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:06:09.585 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:06:09.585 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:06:09.585 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:06:09.585 07:11:42 -- spdk/autotest.sh@130 -- # uname -s
00:06:09.585 07:11:42 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:06:09.585 07:11:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
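The Hugepages and device rows printed by setup.sh status above map directly onto sysfs; the per-node hugepage counts, for instance, can be reproduced with a few lines of shell (standard sysfs layout; the formatting only approximates the table):

    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*kB; do
            printf '%s %s %s / %s\n' "${node##*/}" "${hp##*hugepages-}" \
                "$(< "$hp/free_hugepages")" "$(< "$hp/nr_hugepages")"
        done
    done
    # prints e.g. "node0 2048kB 2048 / 2048", matching the rows above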
common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:13.793 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:13.793 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:15.701 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:15.701 07:11:48 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:17.082 07:11:49 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:17.082 07:11:49 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:17.082 07:11:49 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:17.082 07:11:49 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:17.082 07:11:49 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:17.082 07:11:49 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:17.082 07:11:49 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:17.082 07:11:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:17.082 07:11:49 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:17.082 07:11:49 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:17.082 07:11:49 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:06:17.082 07:11:49 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:06:21.275 Waiting for block devices as requested 00:06:21.275 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:21.275 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:21.275 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:21.275 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:21.275 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:21.275 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:21.275 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:21.534 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:21.534 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:21.534 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:21.793 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:21.794 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:21.794 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:22.053 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:22.053 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:22.053 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:22.313 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:06:22.313 07:11:54 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:22.313 07:11:54 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:06:22.313 07:11:54 -- common/autotest_common.sh@1502 -- # 
readlink -f /sys/class/nvme/nvme0 00:06:22.313 07:11:54 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:06:22.313 07:11:54 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:06:22.313 07:11:54 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:06:22.313 07:11:54 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:06:22.313 07:11:54 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:22.573 07:11:54 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:22.573 07:11:54 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:22.573 07:11:54 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:22.573 07:11:54 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:22.573 07:11:54 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:22.573 07:11:54 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:06:22.573 07:11:54 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:22.573 07:11:54 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:22.573 07:11:54 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:22.573 07:11:54 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:22.573 07:11:54 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:22.573 07:11:54 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:22.573 07:11:54 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:22.573 07:11:54 -- common/autotest_common.sh@1557 -- # continue 00:06:22.573 07:11:54 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:22.573 07:11:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.573 07:11:54 -- common/autotest_common.sh@10 -- # set +x 00:06:22.573 07:11:54 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:22.573 07:11:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.573 07:11:54 -- common/autotest_common.sh@10 -- # set +x 00:06:22.573 07:11:54 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:26.768 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:26.768 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:26.769 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:28.672 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:28.672 07:12:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:28.672 07:12:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.672 07:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:28.672 07:12:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:28.672 07:12:00 -- 
common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:28.672 07:12:00 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:28.672 07:12:01 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:28.672 07:12:01 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:28.672 07:12:01 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:28.672 07:12:01 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:28.672 07:12:01 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:28.672 07:12:01 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:28.672 07:12:01 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:28.672 07:12:01 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:28.672 07:12:01 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:28.673 07:12:01 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:06:28.673 07:12:01 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:28.673 07:12:01 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:06:28.673 07:12:01 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:28.673 07:12:01 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:28.673 07:12:01 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:28.673 07:12:01 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:06:28.673 07:12:01 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:06:28.673 07:12:01 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2501279 00:06:28.673 07:12:01 -- common/autotest_common.sh@1598 -- # waitforlisten 2501279 00:06:28.673 07:12:01 -- common/autotest_common.sh@831 -- # '[' -z 2501279 ']' 00:06:28.673 07:12:01 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.673 07:12:01 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.673 07:12:01 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.673 07:12:01 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.673 07:12:01 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.673 07:12:01 -- common/autotest_common.sh@10 -- # set +x 00:06:28.673 [2024-07-25 07:12:01.184783] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
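The filtering traced above reduces to a small amount of shell: gen_nvme.sh lists every NVMe BDF, and a BDF survives only when sysfs reports PCI device ID 0x0a54 for it. A minimal standalone sketch of the same filter, assuming it is run from the SPDK repository root used in this job:

    bdfs=()
    for bdf in $(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
        # sysfs exposes each PCI function's 16-bit device ID
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"    # on this node: 0000:d8:00.0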
00:06:28.673 [2024-07-25 07:12:01.184837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501279 ] 00:06:28.931 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.931 [2024-07-25 07:12:01.268904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.931 [2024-07-25 07:12:01.344063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.500 07:12:01 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.500 07:12:01 -- common/autotest_common.sh@864 -- # return 0 00:06:29.500 07:12:01 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:06:29.500 07:12:01 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:06:29.500 07:12:01 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:06:32.856 nvme0n1 00:06:32.857 07:12:04 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:32.857 [2024-07-25 07:12:05.147522] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:32.857 request: 00:06:32.857 { 00:06:32.857 "nvme_ctrlr_name": "nvme0", 00:06:32.857 "password": "test", 00:06:32.857 "method": "bdev_nvme_opal_revert", 00:06:32.857 "req_id": 1 00:06:32.857 } 00:06:32.857 Got JSON-RPC error response 00:06:32.857 response: 00:06:32.857 { 00:06:32.857 "code": -32602, 00:06:32.857 "message": "Invalid parameters" 00:06:32.857 } 00:06:32.857 07:12:05 -- common/autotest_common.sh@1604 -- # true 00:06:32.857 07:12:05 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:06:32.857 07:12:05 -- common/autotest_common.sh@1608 -- # killprocess 2501279 00:06:32.857 07:12:05 -- common/autotest_common.sh@950 -- # '[' -z 2501279 ']' 00:06:32.857 07:12:05 -- common/autotest_common.sh@954 -- # kill -0 2501279 00:06:32.857 07:12:05 -- common/autotest_common.sh@955 -- # uname 00:06:32.857 07:12:05 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.857 07:12:05 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2501279 00:06:32.857 07:12:05 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.857 07:12:05 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.857 07:12:05 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2501279' 00:06:32.857 killing process with pid 2501279 00:06:32.857 07:12:05 -- common/autotest_common.sh@969 -- # kill 2501279 00:06:32.857 07:12:05 -- common/autotest_common.sh@974 -- # wait 2501279 00:06:35.392 07:12:07 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:35.392 07:12:07 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:35.392 07:12:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:35.392 07:12:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:35.392 07:12:07 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:35.392 07:12:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:35.392 07:12:07 -- common/autotest_common.sh@10 -- # set +x 00:06:35.392 07:12:07 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:35.392 07:12:07 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:35.392 07:12:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
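The opal revert exchange traced above is plain JSON-RPC against the just-started target, and the -32602 "Invalid parameters" response is the expected outcome here: the target itself logs "nvme0 not support opal". A hedged reproduction of the same two calls, assuming a target listening on the default /var/tmp/spdk.sock and repository-root relative paths:

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
    scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
    # expect: "Invalid parameters" (code -32602) when the drive is not OPAL-capable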
00:06:35.392 07:12:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.392 07:12:07 -- common/autotest_common.sh@10 -- # set +x 00:06:35.392 ************************************ 00:06:35.392 START TEST env 00:06:35.392 ************************************ 00:06:35.392 07:12:07 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:35.652 * Looking for test storage... 00:06:35.652 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:06:35.652 07:12:07 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:35.652 07:12:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.652 07:12:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.652 07:12:07 env -- common/autotest_common.sh@10 -- # set +x 00:06:35.652 ************************************ 00:06:35.652 START TEST env_memory 00:06:35.652 ************************************ 00:06:35.652 07:12:08 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:35.652 00:06:35.652 00:06:35.652 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.652 http://cunit.sourceforge.net/ 00:06:35.652 00:06:35.652 00:06:35.652 Suite: memory 00:06:35.652 Test: alloc and free memory map ...[2024-07-25 07:12:08.051737] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:35.652 passed 00:06:35.652 Test: mem map translation ...[2024-07-25 07:12:08.071117] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:35.652 [2024-07-25 07:12:08.071136] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:35.652 [2024-07-25 07:12:08.071174] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:35.652 [2024-07-25 07:12:08.071183] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:35.652 passed 00:06:35.652 Test: mem map registration ...[2024-07-25 07:12:08.107459] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:35.652 [2024-07-25 07:12:08.107476] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:35.652 passed 00:06:35.652 Test: mem map adjacent registrations ...passed 00:06:35.652 00:06:35.652 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.652 suites 1 1 n/a 0 0 00:06:35.652 tests 4 4 4 0 0 00:06:35.652 asserts 152 152 152 0 n/a 00:06:35.652 00:06:35.652 Elapsed time = 0.140 seconds 00:06:35.652 00:06:35.652 real 0m0.153s 00:06:35.652 user 0m0.145s 00:06:35.652 sys 0m0.008s 00:06:35.652 07:12:08 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.652 07:12:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:35.652 
************************************ 00:06:35.652 END TEST env_memory 00:06:35.652 ************************************ 00:06:35.912 07:12:08 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:35.912 07:12:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.912 07:12:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.912 07:12:08 env -- common/autotest_common.sh@10 -- # set +x 00:06:35.912 ************************************ 00:06:35.912 START TEST env_vtophys 00:06:35.912 ************************************ 00:06:35.912 07:12:08 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:35.912 EAL: lib.eal log level changed from notice to debug 00:06:35.912 EAL: Detected lcore 0 as core 0 on socket 0 00:06:35.912 EAL: Detected lcore 1 as core 1 on socket 0 00:06:35.912 EAL: Detected lcore 2 as core 2 on socket 0 00:06:35.912 EAL: Detected lcore 3 as core 3 on socket 0 00:06:35.912 EAL: Detected lcore 4 as core 4 on socket 0 00:06:35.912 EAL: Detected lcore 5 as core 5 on socket 0 00:06:35.912 EAL: Detected lcore 6 as core 6 on socket 0 00:06:35.912 EAL: Detected lcore 7 as core 8 on socket 0 00:06:35.912 EAL: Detected lcore 8 as core 9 on socket 0 00:06:35.912 EAL: Detected lcore 9 as core 10 on socket 0 00:06:35.912 EAL: Detected lcore 10 as core 11 on socket 0 00:06:35.912 EAL: Detected lcore 11 as core 12 on socket 0 00:06:35.912 EAL: Detected lcore 12 as core 13 on socket 0 00:06:35.912 EAL: Detected lcore 13 as core 14 on socket 0 00:06:35.912 EAL: Detected lcore 14 as core 16 on socket 0 00:06:35.912 EAL: Detected lcore 15 as core 17 on socket 0 00:06:35.912 EAL: Detected lcore 16 as core 18 on socket 0 00:06:35.912 EAL: Detected lcore 17 as core 19 on socket 0 00:06:35.912 EAL: Detected lcore 18 as core 20 on socket 0 00:06:35.912 EAL: Detected lcore 19 as core 21 on socket 0 00:06:35.912 EAL: Detected lcore 20 as core 22 on socket 0 00:06:35.912 EAL: Detected lcore 21 as core 24 on socket 0 00:06:35.912 EAL: Detected lcore 22 as core 25 on socket 0 00:06:35.912 EAL: Detected lcore 23 as core 26 on socket 0 00:06:35.912 EAL: Detected lcore 24 as core 27 on socket 0 00:06:35.912 EAL: Detected lcore 25 as core 28 on socket 0 00:06:35.912 EAL: Detected lcore 26 as core 29 on socket 0 00:06:35.912 EAL: Detected lcore 27 as core 30 on socket 0 00:06:35.912 EAL: Detected lcore 28 as core 0 on socket 1 00:06:35.912 EAL: Detected lcore 29 as core 1 on socket 1 00:06:35.912 EAL: Detected lcore 30 as core 2 on socket 1 00:06:35.912 EAL: Detected lcore 31 as core 3 on socket 1 00:06:35.912 EAL: Detected lcore 32 as core 4 on socket 1 00:06:35.912 EAL: Detected lcore 33 as core 5 on socket 1 00:06:35.912 EAL: Detected lcore 34 as core 6 on socket 1 00:06:35.913 EAL: Detected lcore 35 as core 8 on socket 1 00:06:35.913 EAL: Detected lcore 36 as core 9 on socket 1 00:06:35.913 EAL: Detected lcore 37 as core 10 on socket 1 00:06:35.913 EAL: Detected lcore 38 as core 11 on socket 1 00:06:35.913 EAL: Detected lcore 39 as core 12 on socket 1 00:06:35.913 EAL: Detected lcore 40 as core 13 on socket 1 00:06:35.913 EAL: Detected lcore 41 as core 14 on socket 1 00:06:35.913 EAL: Detected lcore 42 as core 16 on socket 1 00:06:35.913 EAL: Detected lcore 43 as core 17 on socket 1 00:06:35.913 EAL: Detected lcore 44 as core 18 on socket 1 00:06:35.913 EAL: Detected lcore 45 as core 19 on socket 1 00:06:35.913 EAL: Detected 
lcore 46 as core 20 on socket 1 00:06:35.913 EAL: Detected lcore 47 as core 21 on socket 1 00:06:35.913 EAL: Detected lcore 48 as core 22 on socket 1 00:06:35.913 EAL: Detected lcore 49 as core 24 on socket 1 00:06:35.913 EAL: Detected lcore 50 as core 25 on socket 1 00:06:35.913 EAL: Detected lcore 51 as core 26 on socket 1 00:06:35.913 EAL: Detected lcore 52 as core 27 on socket 1 00:06:35.913 EAL: Detected lcore 53 as core 28 on socket 1 00:06:35.913 EAL: Detected lcore 54 as core 29 on socket 1 00:06:35.913 EAL: Detected lcore 55 as core 30 on socket 1 00:06:35.913 EAL: Detected lcore 56 as core 0 on socket 0 00:06:35.913 EAL: Detected lcore 57 as core 1 on socket 0 00:06:35.913 EAL: Detected lcore 58 as core 2 on socket 0 00:06:35.913 EAL: Detected lcore 59 as core 3 on socket 0 00:06:35.913 EAL: Detected lcore 60 as core 4 on socket 0 00:06:35.913 EAL: Detected lcore 61 as core 5 on socket 0 00:06:35.913 EAL: Detected lcore 62 as core 6 on socket 0 00:06:35.913 EAL: Detected lcore 63 as core 8 on socket 0 00:06:35.913 EAL: Detected lcore 64 as core 9 on socket 0 00:06:35.913 EAL: Detected lcore 65 as core 10 on socket 0 00:06:35.913 EAL: Detected lcore 66 as core 11 on socket 0 00:06:35.913 EAL: Detected lcore 67 as core 12 on socket 0 00:06:35.913 EAL: Detected lcore 68 as core 13 on socket 0 00:06:35.913 EAL: Detected lcore 69 as core 14 on socket 0 00:06:35.913 EAL: Detected lcore 70 as core 16 on socket 0 00:06:35.913 EAL: Detected lcore 71 as core 17 on socket 0 00:06:35.913 EAL: Detected lcore 72 as core 18 on socket 0 00:06:35.913 EAL: Detected lcore 73 as core 19 on socket 0 00:06:35.913 EAL: Detected lcore 74 as core 20 on socket 0 00:06:35.913 EAL: Detected lcore 75 as core 21 on socket 0 00:06:35.913 EAL: Detected lcore 76 as core 22 on socket 0 00:06:35.913 EAL: Detected lcore 77 as core 24 on socket 0 00:06:35.913 EAL: Detected lcore 78 as core 25 on socket 0 00:06:35.913 EAL: Detected lcore 79 as core 26 on socket 0 00:06:35.913 EAL: Detected lcore 80 as core 27 on socket 0 00:06:35.913 EAL: Detected lcore 81 as core 28 on socket 0 00:06:35.913 EAL: Detected lcore 82 as core 29 on socket 0 00:06:35.913 EAL: Detected lcore 83 as core 30 on socket 0 00:06:35.913 EAL: Detected lcore 84 as core 0 on socket 1 00:06:35.913 EAL: Detected lcore 85 as core 1 on socket 1 00:06:35.913 EAL: Detected lcore 86 as core 2 on socket 1 00:06:35.913 EAL: Detected lcore 87 as core 3 on socket 1 00:06:35.913 EAL: Detected lcore 88 as core 4 on socket 1 00:06:35.913 EAL: Detected lcore 89 as core 5 on socket 1 00:06:35.913 EAL: Detected lcore 90 as core 6 on socket 1 00:06:35.913 EAL: Detected lcore 91 as core 8 on socket 1 00:06:35.913 EAL: Detected lcore 92 as core 9 on socket 1 00:06:35.913 EAL: Detected lcore 93 as core 10 on socket 1 00:06:35.913 EAL: Detected lcore 94 as core 11 on socket 1 00:06:35.913 EAL: Detected lcore 95 as core 12 on socket 1 00:06:35.913 EAL: Detected lcore 96 as core 13 on socket 1 00:06:35.913 EAL: Detected lcore 97 as core 14 on socket 1 00:06:35.913 EAL: Detected lcore 98 as core 16 on socket 1 00:06:35.913 EAL: Detected lcore 99 as core 17 on socket 1 00:06:35.913 EAL: Detected lcore 100 as core 18 on socket 1 00:06:35.913 EAL: Detected lcore 101 as core 19 on socket 1 00:06:35.913 EAL: Detected lcore 102 as core 20 on socket 1 00:06:35.913 EAL: Detected lcore 103 as core 21 on socket 1 00:06:35.913 EAL: Detected lcore 104 as core 22 on socket 1 00:06:35.913 EAL: Detected lcore 105 as core 24 on socket 1 00:06:35.913 EAL: Detected lcore 106 as core 25 on 
socket 1 00:06:35.913 EAL: Detected lcore 107 as core 26 on socket 1 00:06:35.913 EAL: Detected lcore 108 as core 27 on socket 1 00:06:35.913 EAL: Detected lcore 109 as core 28 on socket 1 00:06:35.913 EAL: Detected lcore 110 as core 29 on socket 1 00:06:35.913 EAL: Detected lcore 111 as core 30 on socket 1 00:06:35.913 EAL: Maximum logical cores by configuration: 128 00:06:35.913 EAL: Detected CPU lcores: 112 00:06:35.913 EAL: Detected NUMA nodes: 2 00:06:35.913 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:35.913 EAL: Detected shared linkage of DPDK 00:06:35.913 EAL: No shared files mode enabled, IPC will be disabled 00:06:35.913 EAL: Bus pci wants IOVA as 'DC' 00:06:35.913 EAL: Buses did not request a specific IOVA mode. 00:06:35.913 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:35.913 EAL: Selected IOVA mode 'VA' 00:06:35.913 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.913 EAL: Probing VFIO support... 00:06:35.913 EAL: IOMMU type 1 (Type 1) is supported 00:06:35.913 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:35.913 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:35.913 EAL: VFIO support initialized 00:06:35.913 EAL: Ask a virtual area of 0x2e000 bytes 00:06:35.913 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:35.913 EAL: Setting up physically contiguous memory... 00:06:35.913 EAL: Setting maximum number of open files to 524288 00:06:35.913 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:35.913 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:35.913 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:35.913 EAL: Ask a virtual area of 0x61000 bytes 00:06:35.913 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:35.913 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:35.913 EAL: Ask a virtual area of 0x400000000 bytes 00:06:35.913 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:35.913 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:35.913 EAL: Ask a virtual area of 0x61000 bytes 00:06:35.913 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:35.913 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:35.913 EAL: Ask a virtual area of 0x400000000 bytes 00:06:35.913 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:35.913 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:35.913 EAL: Ask a virtual area of 0x61000 bytes 00:06:35.913 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:35.913 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:35.913 EAL: Ask a virtual area of 0x400000000 bytes 00:06:35.913 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:35.913 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:35.913 EAL: Ask a virtual area of 0x61000 bytes 00:06:35.913 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:35.913 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:35.913 EAL: Ask a virtual area of 0x400000000 bytes 00:06:35.913 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:35.913 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:35.913 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:35.913 EAL: Ask a virtual area of 0x61000 bytes 00:06:35.913 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:35.913 EAL: Memseg list allocated at 
socket 1, page size 0x800kB 00:06:35.913 EAL: Ask a virtual area of 0x400000000 bytes 00:06:35.913 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:35.913 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:35.913 EAL: Ask a virtual area of 0x61000 bytes 00:06:35.913 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:35.913 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:35.913 EAL: Ask a virtual area of 0x400000000 bytes 00:06:35.913 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:35.913 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:35.913 EAL: Ask a virtual area of 0x61000 bytes 00:06:35.913 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:35.913 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:35.913 EAL: Ask a virtual area of 0x400000000 bytes 00:06:35.913 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:35.913 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:35.913 EAL: Ask a virtual area of 0x61000 bytes 00:06:35.913 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:35.913 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:35.913 EAL: Ask a virtual area of 0x400000000 bytes 00:06:35.913 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:35.913 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:35.913 EAL: Hugepages will be freed exactly as allocated. 00:06:35.913 EAL: No shared files mode enabled, IPC is disabled 00:06:35.913 EAL: No shared files mode enabled, IPC is disabled 00:06:35.913 EAL: TSC frequency is ~2500000 KHz 00:06:35.913 EAL: Main lcore 0 is ready (tid=7f5a9b915a00;cpuset=[0]) 00:06:35.913 EAL: Trying to obtain current memory policy. 00:06:35.913 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:35.913 EAL: Restoring previous memory policy: 0 00:06:35.913 EAL: request: mp_malloc_sync 00:06:35.913 EAL: No shared files mode enabled, IPC is disabled 00:06:35.913 EAL: Heap on socket 0 was expanded by 2MB 00:06:35.913 EAL: No shared files mode enabled, IPC is disabled 00:06:35.913 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:35.913 EAL: Mem event callback 'spdk:(nil)' registered 00:06:35.913 00:06:35.913 00:06:35.913 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.913 http://cunit.sourceforge.net/ 00:06:35.913 00:06:35.913 00:06:35.913 Suite: components_suite 00:06:35.913 Test: vtophys_malloc_test ...passed 00:06:35.913 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:35.913 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:35.913 EAL: Restoring previous memory policy: 4 00:06:35.913 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.913 EAL: request: mp_malloc_sync 00:06:35.913 EAL: No shared files mode enabled, IPC is disabled 00:06:35.913 EAL: Heap on socket 0 was expanded by 4MB 00:06:35.913 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.913 EAL: request: mp_malloc_sync 00:06:35.913 EAL: No shared files mode enabled, IPC is disabled 00:06:35.913 EAL: Heap on socket 0 was shrunk by 4MB 00:06:35.914 EAL: Trying to obtain current memory policy. 
00:06:35.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:35.914 EAL: Restoring previous memory policy: 4 00:06:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.914 EAL: request: mp_malloc_sync 00:06:35.914 EAL: No shared files mode enabled, IPC is disabled 00:06:35.914 EAL: Heap on socket 0 was expanded by 6MB 00:06:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.914 EAL: request: mp_malloc_sync 00:06:35.914 EAL: No shared files mode enabled, IPC is disabled 00:06:35.914 EAL: Heap on socket 0 was shrunk by 6MB 00:06:35.914 EAL: Trying to obtain current memory policy. 00:06:35.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:35.914 EAL: Restoring previous memory policy: 4 00:06:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.914 EAL: request: mp_malloc_sync 00:06:35.914 EAL: No shared files mode enabled, IPC is disabled 00:06:35.914 EAL: Heap on socket 0 was expanded by 10MB 00:06:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.914 EAL: request: mp_malloc_sync 00:06:35.914 EAL: No shared files mode enabled, IPC is disabled 00:06:35.914 EAL: Heap on socket 0 was shrunk by 10MB 00:06:35.914 EAL: Trying to obtain current memory policy. 00:06:35.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:35.914 EAL: Restoring previous memory policy: 4 00:06:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.914 EAL: request: mp_malloc_sync 00:06:35.914 EAL: No shared files mode enabled, IPC is disabled 00:06:35.914 EAL: Heap on socket 0 was expanded by 18MB 00:06:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.914 EAL: request: mp_malloc_sync 00:06:35.914 EAL: No shared files mode enabled, IPC is disabled 00:06:35.914 EAL: Heap on socket 0 was shrunk by 18MB 00:06:35.914 EAL: Trying to obtain current memory policy. 00:06:35.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:35.914 EAL: Restoring previous memory policy: 4 00:06:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.914 EAL: request: mp_malloc_sync 00:06:35.914 EAL: No shared files mode enabled, IPC is disabled 00:06:35.914 EAL: Heap on socket 0 was expanded by 34MB 00:06:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.914 EAL: request: mp_malloc_sync 00:06:35.914 EAL: No shared files mode enabled, IPC is disabled 00:06:35.914 EAL: Heap on socket 0 was shrunk by 34MB 00:06:35.914 EAL: Trying to obtain current memory policy. 00:06:35.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:35.914 EAL: Restoring previous memory policy: 4 00:06:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.914 EAL: request: mp_malloc_sync 00:06:35.914 EAL: No shared files mode enabled, IPC is disabled 00:06:35.914 EAL: Heap on socket 0 was expanded by 66MB 00:06:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.914 EAL: request: mp_malloc_sync 00:06:35.914 EAL: No shared files mode enabled, IPC is disabled 00:06:35.914 EAL: Heap on socket 0 was shrunk by 66MB 00:06:35.914 EAL: Trying to obtain current memory policy. 
00:06:35.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:35.914 EAL: Restoring previous memory policy: 4 00:06:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.914 EAL: request: mp_malloc_sync 00:06:35.914 EAL: No shared files mode enabled, IPC is disabled 00:06:35.914 EAL: Heap on socket 0 was expanded by 130MB 00:06:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:06:36.174 EAL: request: mp_malloc_sync 00:06:36.174 EAL: No shared files mode enabled, IPC is disabled 00:06:36.174 EAL: Heap on socket 0 was shrunk by 130MB 00:06:36.174 EAL: Trying to obtain current memory policy. 00:06:36.174 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:36.174 EAL: Restoring previous memory policy: 4 00:06:36.174 EAL: Calling mem event callback 'spdk:(nil)' 00:06:36.174 EAL: request: mp_malloc_sync 00:06:36.174 EAL: No shared files mode enabled, IPC is disabled 00:06:36.174 EAL: Heap on socket 0 was expanded by 258MB 00:06:36.174 EAL: Calling mem event callback 'spdk:(nil)' 00:06:36.174 EAL: request: mp_malloc_sync 00:06:36.174 EAL: No shared files mode enabled, IPC is disabled 00:06:36.174 EAL: Heap on socket 0 was shrunk by 258MB 00:06:36.174 EAL: Trying to obtain current memory policy. 00:06:36.174 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:36.174 EAL: Restoring previous memory policy: 4 00:06:36.174 EAL: Calling mem event callback 'spdk:(nil)' 00:06:36.174 EAL: request: mp_malloc_sync 00:06:36.174 EAL: No shared files mode enabled, IPC is disabled 00:06:36.174 EAL: Heap on socket 0 was expanded by 514MB 00:06:36.433 EAL: Calling mem event callback 'spdk:(nil)' 00:06:36.433 EAL: request: mp_malloc_sync 00:06:36.433 EAL: No shared files mode enabled, IPC is disabled 00:06:36.433 EAL: Heap on socket 0 was shrunk by 514MB 00:06:36.433 EAL: Trying to obtain current memory policy. 
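The vtophys_spdk_malloc_test rounds above (and the final 1026MB round below) follow a size schedule that can be read straight off the expand/shrink lines: each round allocates 2^k + 2 MB for k = 1..10, i.e. 4, 6, 10, 18, 34, 66, 130, 258, 514 and 1026 MB, bracketed by the 2MB expand at startup and the 2MB shrink at teardown, and every expansion is mirrored by a matching shrink once the buffer is freed. This is an observation from the trace, not a claim about the test source.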
00:06:36.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:36.692 EAL: Restoring previous memory policy: 4 00:06:36.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:36.692 EAL: request: mp_malloc_sync 00:06:36.692 EAL: No shared files mode enabled, IPC is disabled 00:06:36.692 EAL: Heap on socket 0 was expanded by 1026MB 00:06:36.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:36.951 EAL: request: mp_malloc_sync 00:06:36.951 EAL: No shared files mode enabled, IPC is disabled 00:06:36.951 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:36.951 passed 00:06:36.951 00:06:36.951 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.951 suites 1 1 n/a 0 0 00:06:36.951 tests 2 2 2 0 0 00:06:36.951 asserts 497 497 497 0 n/a 00:06:36.951 00:06:36.951 Elapsed time = 0.968 seconds 00:06:36.951 EAL: Calling mem event callback 'spdk:(nil)' 00:06:36.951 EAL: request: mp_malloc_sync 00:06:36.951 EAL: No shared files mode enabled, IPC is disabled 00:06:36.951 EAL: Heap on socket 0 was shrunk by 2MB 00:06:36.951 EAL: No shared files mode enabled, IPC is disabled 00:06:36.951 EAL: No shared files mode enabled, IPC is disabled 00:06:36.951 EAL: No shared files mode enabled, IPC is disabled 00:06:36.951 00:06:36.951 real 0m1.121s 00:06:36.951 user 0m0.635s 00:06:36.951 sys 0m0.446s 00:06:36.951 07:12:09 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.951 07:12:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:36.951 ************************************ 00:06:36.951 END TEST env_vtophys 00:06:36.951 ************************************ 00:06:36.951 07:12:09 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:36.951 07:12:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.951 07:12:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.951 07:12:09 env -- common/autotest_common.sh@10 -- # set +x 00:06:36.951 ************************************ 00:06:36.951 START TEST env_pci 00:06:36.951 ************************************ 00:06:36.951 07:12:09 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:36.951 00:06:36.951 00:06:36.951 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.951 http://cunit.sourceforge.net/ 00:06:36.951 00:06:36.951 00:06:36.951 Suite: pci 00:06:36.951 Test: pci_hook ...[2024-07-25 07:12:09.442697] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2503298 has claimed it 00:06:37.211 EAL: Cannot find device (10000:00:01.0) 00:06:37.211 EAL: Failed to attach device on primary process 00:06:37.211 passed 00:06:37.211 00:06:37.211 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.211 suites 1 1 n/a 0 0 00:06:37.211 tests 1 1 1 0 0 00:06:37.211 asserts 25 25 25 0 n/a 00:06:37.211 00:06:37.211 Elapsed time = 0.039 seconds 00:06:37.211 00:06:37.211 real 0m0.056s 00:06:37.211 user 0m0.012s 00:06:37.211 sys 0m0.043s 00:06:37.211 07:12:09 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.211 07:12:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:37.211 ************************************ 00:06:37.211 END TEST env_pci 00:06:37.211 ************************************ 00:06:37.211 07:12:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:37.211 07:12:09 env -- 
env/env.sh@15 -- # uname 00:06:37.211 07:12:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:37.211 07:12:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:37.211 07:12:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:37.211 07:12:09 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:37.211 07:12:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.211 07:12:09 env -- common/autotest_common.sh@10 -- # set +x 00:06:37.211 ************************************ 00:06:37.211 START TEST env_dpdk_post_init 00:06:37.211 ************************************ 00:06:37.211 07:12:09 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:37.211 EAL: Detected CPU lcores: 112 00:06:37.211 EAL: Detected NUMA nodes: 2 00:06:37.211 EAL: Detected shared linkage of DPDK 00:06:37.211 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:37.211 EAL: Selected IOVA mode 'VA' 00:06:37.211 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.211 EAL: VFIO support initialized 00:06:37.211 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:37.211 EAL: Using IOMMU type 1 (Type 1) 00:06:37.211 EAL: Ignore mapping IO port bar(1) 00:06:37.211 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:37.211 EAL: Ignore mapping IO port bar(1) 00:06:37.211 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:37.470 EAL: Ignore mapping IO port bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:37.470 EAL: Ignore mapping IO port 
bar(1) 00:06:37.470 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:38.407 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:06:42.598 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:06:42.598 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:06:42.598 Starting DPDK initialization... 00:06:42.598 Starting SPDK post initialization... 00:06:42.598 SPDK NVMe probe 00:06:42.598 Attaching to 0000:d8:00.0 00:06:42.598 Attached to 0000:d8:00.0 00:06:42.598 Cleaning up... 00:06:42.598 00:06:42.598 real 0m5.232s 00:06:42.598 user 0m3.898s 00:06:42.598 sys 0m0.389s 00:06:42.598 07:12:14 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.598 07:12:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:42.598 ************************************ 00:06:42.598 END TEST env_dpdk_post_init 00:06:42.598 ************************************ 00:06:42.598 07:12:14 env -- env/env.sh@26 -- # uname 00:06:42.598 07:12:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:42.598 07:12:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:42.598 07:12:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.598 07:12:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.598 07:12:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:42.598 ************************************ 00:06:42.598 START TEST env_mem_callbacks 00:06:42.598 ************************************ 00:06:42.598 07:12:14 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:42.598 EAL: Detected CPU lcores: 112 00:06:42.598 EAL: Detected NUMA nodes: 2 00:06:42.598 EAL: Detected shared linkage of DPDK 00:06:42.598 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:42.598 EAL: Selected IOVA mode 'VA' 00:06:42.598 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.598 EAL: VFIO support initialized 00:06:42.598 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:42.598 00:06:42.598 00:06:42.598 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.598 http://cunit.sourceforge.net/ 00:06:42.598 00:06:42.598 00:06:42.598 Suite: memory 00:06:42.598 Test: test ... 
00:06:42.598 register 0x200000200000 2097152 00:06:42.598 malloc 3145728 00:06:42.598 register 0x200000400000 4194304 00:06:42.598 buf 0x200000500000 len 3145728 PASSED 00:06:42.598 malloc 64 00:06:42.598 buf 0x2000004fff40 len 64 PASSED 00:06:42.598 malloc 4194304 00:06:42.598 register 0x200000800000 6291456 00:06:42.598 buf 0x200000a00000 len 4194304 PASSED 00:06:42.598 free 0x200000500000 3145728 00:06:42.598 free 0x2000004fff40 64 00:06:42.598 unregister 0x200000400000 4194304 PASSED 00:06:42.598 free 0x200000a00000 4194304 00:06:42.598 unregister 0x200000800000 6291456 PASSED 00:06:42.598 malloc 8388608 00:06:42.598 register 0x200000400000 10485760 00:06:42.598 buf 0x200000600000 len 8388608 PASSED 00:06:42.598 free 0x200000600000 8388608 00:06:42.598 unregister 0x200000400000 10485760 PASSED 00:06:42.598 passed 00:06:42.598 00:06:42.598 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.598 suites 1 1 n/a 0 0 00:06:42.598 tests 1 1 1 0 0 00:06:42.598 asserts 15 15 15 0 n/a 00:06:42.598 00:06:42.598 Elapsed time = 0.005 seconds 00:06:42.598 00:06:42.598 real 0m0.073s 00:06:42.598 user 0m0.017s 00:06:42.598 sys 0m0.056s 00:06:42.598 07:12:14 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.598 07:12:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:42.598 ************************************ 00:06:42.598 END TEST env_mem_callbacks 00:06:42.598 ************************************ 00:06:42.598 00:06:42.598 real 0m7.135s 00:06:42.598 user 0m4.887s 00:06:42.598 sys 0m1.302s 00:06:42.598 07:12:14 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.598 07:12:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:42.598 ************************************ 00:06:42.598 END TEST env 00:06:42.598 ************************************ 00:06:42.598 07:12:15 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:42.598 07:12:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.598 07:12:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.598 07:12:15 -- common/autotest_common.sh@10 -- # set +x 00:06:42.598 ************************************ 00:06:42.598 START TEST rpc 00:06:42.598 ************************************ 00:06:42.598 07:12:15 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:42.857 * Looking for test storage... 00:06:42.857 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:42.857 07:12:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2504410 00:06:42.857 07:12:15 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:42.857 07:12:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.857 07:12:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2504410 00:06:42.857 07:12:15 rpc -- common/autotest_common.sh@831 -- # '[' -z 2504410 ']' 00:06:42.857 07:12:15 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.857 07:12:15 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.857 07:12:15 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
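The mem_callbacks trace further up makes the env layer's registration granularity visible as observed on this run: register callbacks arrive in 2 MiB multiples sized to cover the allocation plus heap metadata, while small allocations are carved from already-registered space with no new callback. Read from the trace itself:

    malloc 3145728 (3 MiB)  -> register 4194304  (2 x 2 MiB)
    malloc 64               -> no new callback; buf 0x2000004fff40 lies inside the 4 MiB region above
    malloc 4194304 (4 MiB)  -> register 6291456  (3 x 2 MiB)
    malloc 8388608 (8 MiB)  -> register 10485760 (5 x 2 MiB)

Each free unwinds the same way (free 3145728 -> unregister 4194304, and so on); this summarizes the logged behavior rather than the allocator's contract.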
00:06:42.857 07:12:15 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.857 07:12:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.857 [2024-07-25 07:12:15.249419] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:06:42.857 [2024-07-25 07:12:15.249474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504410 ] 00:06:42.857 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.857 [2024-07-25 07:12:15.329831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.116 [2024-07-25 07:12:15.402917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:43.116 [2024-07-25 07:12:15.402954] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2504410' to capture a snapshot of events at runtime. 00:06:43.116 [2024-07-25 07:12:15.402963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.116 [2024-07-25 07:12:15.402971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.116 [2024-07-25 07:12:15.402977] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2504410 for offline analysis/debug. 00:06:43.116 [2024-07-25 07:12:15.402999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.685 07:12:16 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.685 07:12:16 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:43.685 07:12:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:43.685 07:12:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:43.685 07:12:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:43.685 07:12:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:43.685 07:12:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.685 07:12:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.685 07:12:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.685 ************************************ 00:06:43.685 START TEST rpc_integrity 00:06:43.685 ************************************ 00:06:43.685 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:43.685 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:43.685 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.685 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.685 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.685 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:43.685 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:43.685 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # 
'[' 0 == 0 ']' 00:06:43.685 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:43.685 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.685 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.685 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.685 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:43.685 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:43.685 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.685 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.685 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.685 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:43.685 { 00:06:43.685 "name": "Malloc0", 00:06:43.685 "aliases": [ 00:06:43.685 "f8ebbbf6-ce8d-4a45-949e-3a61ad5edb09" 00:06:43.685 ], 00:06:43.685 "product_name": "Malloc disk", 00:06:43.685 "block_size": 512, 00:06:43.685 "num_blocks": 16384, 00:06:43.685 "uuid": "f8ebbbf6-ce8d-4a45-949e-3a61ad5edb09", 00:06:43.685 "assigned_rate_limits": { 00:06:43.685 "rw_ios_per_sec": 0, 00:06:43.685 "rw_mbytes_per_sec": 0, 00:06:43.685 "r_mbytes_per_sec": 0, 00:06:43.685 "w_mbytes_per_sec": 0 00:06:43.685 }, 00:06:43.685 "claimed": false, 00:06:43.685 "zoned": false, 00:06:43.685 "supported_io_types": { 00:06:43.685 "read": true, 00:06:43.685 "write": true, 00:06:43.685 "unmap": true, 00:06:43.685 "flush": true, 00:06:43.685 "reset": true, 00:06:43.685 "nvme_admin": false, 00:06:43.685 "nvme_io": false, 00:06:43.685 "nvme_io_md": false, 00:06:43.685 "write_zeroes": true, 00:06:43.685 "zcopy": true, 00:06:43.685 "get_zone_info": false, 00:06:43.685 "zone_management": false, 00:06:43.685 "zone_append": false, 00:06:43.685 "compare": false, 00:06:43.685 "compare_and_write": false, 00:06:43.685 "abort": true, 00:06:43.685 "seek_hole": false, 00:06:43.685 "seek_data": false, 00:06:43.685 "copy": true, 00:06:43.685 "nvme_iov_md": false 00:06:43.685 }, 00:06:43.685 "memory_domains": [ 00:06:43.685 { 00:06:43.685 "dma_device_id": "system", 00:06:43.685 "dma_device_type": 1 00:06:43.685 }, 00:06:43.685 { 00:06:43.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.685 "dma_device_type": 2 00:06:43.685 } 00:06:43.685 ], 00:06:43.685 "driver_specific": {} 00:06:43.685 } 00:06:43.685 ]' 00:06:43.685 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:43.685 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:43.685 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:43.685 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.685 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.685 [2024-07-25 07:12:16.213847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:43.685 [2024-07-25 07:12:16.213877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.685 [2024-07-25 07:12:16.213891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ca8ec0 00:06:43.685 [2024-07-25 07:12:16.213899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.945 [2024-07-25 07:12:16.214908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.945 [2024-07-25 
07:12:16.214929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:43.945 Passthru0 00:06:43.945 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.945 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:43.945 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.945 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.945 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.945 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:43.945 { 00:06:43.945 "name": "Malloc0", 00:06:43.945 "aliases": [ 00:06:43.945 "f8ebbbf6-ce8d-4a45-949e-3a61ad5edb09" 00:06:43.945 ], 00:06:43.945 "product_name": "Malloc disk", 00:06:43.945 "block_size": 512, 00:06:43.945 "num_blocks": 16384, 00:06:43.945 "uuid": "f8ebbbf6-ce8d-4a45-949e-3a61ad5edb09", 00:06:43.945 "assigned_rate_limits": { 00:06:43.945 "rw_ios_per_sec": 0, 00:06:43.945 "rw_mbytes_per_sec": 0, 00:06:43.945 "r_mbytes_per_sec": 0, 00:06:43.945 "w_mbytes_per_sec": 0 00:06:43.945 }, 00:06:43.945 "claimed": true, 00:06:43.945 "claim_type": "exclusive_write", 00:06:43.945 "zoned": false, 00:06:43.945 "supported_io_types": { 00:06:43.945 "read": true, 00:06:43.945 "write": true, 00:06:43.945 "unmap": true, 00:06:43.945 "flush": true, 00:06:43.945 "reset": true, 00:06:43.945 "nvme_admin": false, 00:06:43.945 "nvme_io": false, 00:06:43.945 "nvme_io_md": false, 00:06:43.945 "write_zeroes": true, 00:06:43.945 "zcopy": true, 00:06:43.945 "get_zone_info": false, 00:06:43.945 "zone_management": false, 00:06:43.945 "zone_append": false, 00:06:43.945 "compare": false, 00:06:43.945 "compare_and_write": false, 00:06:43.945 "abort": true, 00:06:43.945 "seek_hole": false, 00:06:43.945 "seek_data": false, 00:06:43.945 "copy": true, 00:06:43.945 "nvme_iov_md": false 00:06:43.945 }, 00:06:43.945 "memory_domains": [ 00:06:43.945 { 00:06:43.945 "dma_device_id": "system", 00:06:43.945 "dma_device_type": 1 00:06:43.945 }, 00:06:43.945 { 00:06:43.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.945 "dma_device_type": 2 00:06:43.945 } 00:06:43.945 ], 00:06:43.945 "driver_specific": {} 00:06:43.945 }, 00:06:43.945 { 00:06:43.945 "name": "Passthru0", 00:06:43.945 "aliases": [ 00:06:43.945 "d45e632b-7e69-5e91-bbd6-0e50ae71a7c7" 00:06:43.945 ], 00:06:43.945 "product_name": "passthru", 00:06:43.945 "block_size": 512, 00:06:43.945 "num_blocks": 16384, 00:06:43.945 "uuid": "d45e632b-7e69-5e91-bbd6-0e50ae71a7c7", 00:06:43.945 "assigned_rate_limits": { 00:06:43.945 "rw_ios_per_sec": 0, 00:06:43.945 "rw_mbytes_per_sec": 0, 00:06:43.945 "r_mbytes_per_sec": 0, 00:06:43.945 "w_mbytes_per_sec": 0 00:06:43.945 }, 00:06:43.945 "claimed": false, 00:06:43.945 "zoned": false, 00:06:43.945 "supported_io_types": { 00:06:43.945 "read": true, 00:06:43.945 "write": true, 00:06:43.946 "unmap": true, 00:06:43.946 "flush": true, 00:06:43.946 "reset": true, 00:06:43.946 "nvme_admin": false, 00:06:43.946 "nvme_io": false, 00:06:43.946 "nvme_io_md": false, 00:06:43.946 "write_zeroes": true, 00:06:43.946 "zcopy": true, 00:06:43.946 "get_zone_info": false, 00:06:43.946 "zone_management": false, 00:06:43.946 "zone_append": false, 00:06:43.946 "compare": false, 00:06:43.946 "compare_and_write": false, 00:06:43.946 "abort": true, 00:06:43.946 "seek_hole": false, 00:06:43.946 "seek_data": false, 00:06:43.946 "copy": true, 00:06:43.946 "nvme_iov_md": false 00:06:43.946 }, 00:06:43.946 
"memory_domains": [ 00:06:43.946 { 00:06:43.946 "dma_device_id": "system", 00:06:43.946 "dma_device_type": 1 00:06:43.946 }, 00:06:43.946 { 00:06:43.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.946 "dma_device_type": 2 00:06:43.946 } 00:06:43.946 ], 00:06:43.946 "driver_specific": { 00:06:43.946 "passthru": { 00:06:43.946 "name": "Passthru0", 00:06:43.946 "base_bdev_name": "Malloc0" 00:06:43.946 } 00:06:43.946 } 00:06:43.946 } 00:06:43.946 ]' 00:06:43.946 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:43.946 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:43.946 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:43.946 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.946 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.946 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.946 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:43.946 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.946 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.946 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.946 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:43.946 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.946 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.946 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.946 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:43.946 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:43.946 07:12:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:43.946 00:06:43.946 real 0m0.299s 00:06:43.946 user 0m0.186s 00:06:43.946 sys 0m0.054s 00:06:43.946 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.946 07:12:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.946 ************************************ 00:06:43.946 END TEST rpc_integrity 00:06:43.946 ************************************ 00:06:43.946 07:12:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:43.946 07:12:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.946 07:12:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.946 07:12:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.946 ************************************ 00:06:43.946 START TEST rpc_plugins 00:06:43.946 ************************************ 00:06:43.946 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:43.946 07:12:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:43.946 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.946 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:43.946 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.946 07:12:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:43.946 07:12:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:43.946 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.946 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 
00:06:44.205 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.205 07:12:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:44.205 { 00:06:44.205 "name": "Malloc1", 00:06:44.205 "aliases": [ 00:06:44.205 "52503e35-8736-4315-8ff1-0598f4cf2fa6" 00:06:44.205 ], 00:06:44.205 "product_name": "Malloc disk", 00:06:44.205 "block_size": 4096, 00:06:44.205 "num_blocks": 256, 00:06:44.205 "uuid": "52503e35-8736-4315-8ff1-0598f4cf2fa6", 00:06:44.205 "assigned_rate_limits": { 00:06:44.205 "rw_ios_per_sec": 0, 00:06:44.205 "rw_mbytes_per_sec": 0, 00:06:44.205 "r_mbytes_per_sec": 0, 00:06:44.205 "w_mbytes_per_sec": 0 00:06:44.205 }, 00:06:44.205 "claimed": false, 00:06:44.205 "zoned": false, 00:06:44.205 "supported_io_types": { 00:06:44.205 "read": true, 00:06:44.205 "write": true, 00:06:44.205 "unmap": true, 00:06:44.205 "flush": true, 00:06:44.205 "reset": true, 00:06:44.205 "nvme_admin": false, 00:06:44.205 "nvme_io": false, 00:06:44.205 "nvme_io_md": false, 00:06:44.205 "write_zeroes": true, 00:06:44.205 "zcopy": true, 00:06:44.205 "get_zone_info": false, 00:06:44.205 "zone_management": false, 00:06:44.205 "zone_append": false, 00:06:44.205 "compare": false, 00:06:44.205 "compare_and_write": false, 00:06:44.205 "abort": true, 00:06:44.205 "seek_hole": false, 00:06:44.205 "seek_data": false, 00:06:44.205 "copy": true, 00:06:44.205 "nvme_iov_md": false 00:06:44.205 }, 00:06:44.205 "memory_domains": [ 00:06:44.205 { 00:06:44.205 "dma_device_id": "system", 00:06:44.206 "dma_device_type": 1 00:06:44.206 }, 00:06:44.206 { 00:06:44.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.206 "dma_device_type": 2 00:06:44.206 } 00:06:44.206 ], 00:06:44.206 "driver_specific": {} 00:06:44.206 } 00:06:44.206 ]' 00:06:44.206 07:12:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:44.206 07:12:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:44.206 07:12:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:44.206 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.206 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:44.206 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.206 07:12:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:44.206 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.206 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:44.206 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.206 07:12:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:44.206 07:12:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:44.206 07:12:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:44.206 00:06:44.206 real 0m0.137s 00:06:44.206 user 0m0.078s 00:06:44.206 sys 0m0.026s 00:06:44.206 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.206 07:12:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:44.206 ************************************ 00:06:44.206 END TEST rpc_plugins 00:06:44.206 ************************************ 00:06:44.206 07:12:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:44.206 07:12:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.206 07:12:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.206 07:12:16 rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:44.206 ************************************ 00:06:44.206 START TEST rpc_trace_cmd_test 00:06:44.206 ************************************ 00:06:44.206 07:12:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:44.206 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:44.206 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:44.206 07:12:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.206 07:12:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.206 07:12:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.206 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:44.206 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2504410", 00:06:44.206 "tpoint_group_mask": "0x8", 00:06:44.206 "iscsi_conn": { 00:06:44.206 "mask": "0x2", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 }, 00:06:44.206 "scsi": { 00:06:44.206 "mask": "0x4", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 }, 00:06:44.206 "bdev": { 00:06:44.206 "mask": "0x8", 00:06:44.206 "tpoint_mask": "0xffffffffffffffff" 00:06:44.206 }, 00:06:44.206 "nvmf_rdma": { 00:06:44.206 "mask": "0x10", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 }, 00:06:44.206 "nvmf_tcp": { 00:06:44.206 "mask": "0x20", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 }, 00:06:44.206 "ftl": { 00:06:44.206 "mask": "0x40", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 }, 00:06:44.206 "blobfs": { 00:06:44.206 "mask": "0x80", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 }, 00:06:44.206 "dsa": { 00:06:44.206 "mask": "0x200", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 }, 00:06:44.206 "thread": { 00:06:44.206 "mask": "0x400", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 }, 00:06:44.206 "nvme_pcie": { 00:06:44.206 "mask": "0x800", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 }, 00:06:44.206 "iaa": { 00:06:44.206 "mask": "0x1000", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 }, 00:06:44.206 "nvme_tcp": { 00:06:44.206 "mask": "0x2000", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 }, 00:06:44.206 "bdev_nvme": { 00:06:44.206 "mask": "0x4000", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 }, 00:06:44.206 "sock": { 00:06:44.206 "mask": "0x8000", 00:06:44.206 "tpoint_mask": "0x0" 00:06:44.206 } 00:06:44.206 }' 00:06:44.206 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:44.206 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:44.206 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:44.465 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:44.465 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:44.465 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:44.465 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:44.465 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:44.465 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:44.465 07:12:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:44.465 00:06:44.465 real 0m0.236s 00:06:44.465 user 0m0.188s 00:06:44.465 sys 0m0.040s 00:06:44.465 07:12:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.465 07:12:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # 
set +x 00:06:44.465 ************************************ 00:06:44.465 END TEST rpc_trace_cmd_test 00:06:44.465 ************************************ 00:06:44.465 07:12:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:44.465 07:12:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:44.465 07:12:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:44.465 07:12:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.465 07:12:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.465 07:12:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.465 ************************************ 00:06:44.465 START TEST rpc_daemon_integrity 00:06:44.465 ************************************ 00:06:44.465 07:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:44.465 07:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:44.465 07:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.465 07:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.465 07:12:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.465 07:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:44.465 07:12:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:44.724 { 00:06:44.724 "name": "Malloc2", 00:06:44.724 "aliases": [ 00:06:44.724 "fa35b720-9c59-4d36-9d48-25c55039d931" 00:06:44.724 ], 00:06:44.724 "product_name": "Malloc disk", 00:06:44.724 "block_size": 512, 00:06:44.724 "num_blocks": 16384, 00:06:44.724 "uuid": "fa35b720-9c59-4d36-9d48-25c55039d931", 00:06:44.724 "assigned_rate_limits": { 00:06:44.724 "rw_ios_per_sec": 0, 00:06:44.724 "rw_mbytes_per_sec": 0, 00:06:44.724 "r_mbytes_per_sec": 0, 00:06:44.724 "w_mbytes_per_sec": 0 00:06:44.724 }, 00:06:44.724 "claimed": false, 00:06:44.724 "zoned": false, 00:06:44.724 "supported_io_types": { 00:06:44.724 "read": true, 00:06:44.724 "write": true, 00:06:44.724 "unmap": true, 00:06:44.724 "flush": true, 00:06:44.724 "reset": true, 00:06:44.724 "nvme_admin": false, 00:06:44.724 "nvme_io": false, 00:06:44.724 "nvme_io_md": false, 00:06:44.724 "write_zeroes": true, 00:06:44.724 "zcopy": true, 00:06:44.724 "get_zone_info": false, 00:06:44.724 "zone_management": false, 00:06:44.724 "zone_append": false, 00:06:44.724 "compare": false, 00:06:44.724 "compare_and_write": false, 00:06:44.724 "abort": true, 00:06:44.724 "seek_hole": false, 
00:06:44.724 "seek_data": false, 00:06:44.724 "copy": true, 00:06:44.724 "nvme_iov_md": false 00:06:44.724 }, 00:06:44.724 "memory_domains": [ 00:06:44.724 { 00:06:44.724 "dma_device_id": "system", 00:06:44.724 "dma_device_type": 1 00:06:44.724 }, 00:06:44.724 { 00:06:44.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.724 "dma_device_type": 2 00:06:44.724 } 00:06:44.724 ], 00:06:44.724 "driver_specific": {} 00:06:44.724 } 00:06:44.724 ]' 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.724 [2024-07-25 07:12:17.116280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:44.724 [2024-07-25 07:12:17.116307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.724 [2024-07-25 07:12:17.116320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e52e00 00:06:44.724 [2024-07-25 07:12:17.116328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.724 [2024-07-25 07:12:17.117257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.724 [2024-07-25 07:12:17.117278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:44.724 Passthru0 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.724 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:44.724 { 00:06:44.724 "name": "Malloc2", 00:06:44.724 "aliases": [ 00:06:44.724 "fa35b720-9c59-4d36-9d48-25c55039d931" 00:06:44.724 ], 00:06:44.725 "product_name": "Malloc disk", 00:06:44.725 "block_size": 512, 00:06:44.725 "num_blocks": 16384, 00:06:44.725 "uuid": "fa35b720-9c59-4d36-9d48-25c55039d931", 00:06:44.725 "assigned_rate_limits": { 00:06:44.725 "rw_ios_per_sec": 0, 00:06:44.725 "rw_mbytes_per_sec": 0, 00:06:44.725 "r_mbytes_per_sec": 0, 00:06:44.725 "w_mbytes_per_sec": 0 00:06:44.725 }, 00:06:44.725 "claimed": true, 00:06:44.725 "claim_type": "exclusive_write", 00:06:44.725 "zoned": false, 00:06:44.725 "supported_io_types": { 00:06:44.725 "read": true, 00:06:44.725 "write": true, 00:06:44.725 "unmap": true, 00:06:44.725 "flush": true, 00:06:44.725 "reset": true, 00:06:44.725 "nvme_admin": false, 00:06:44.725 "nvme_io": false, 00:06:44.725 "nvme_io_md": false, 00:06:44.725 "write_zeroes": true, 00:06:44.725 "zcopy": true, 00:06:44.725 "get_zone_info": false, 00:06:44.725 "zone_management": false, 00:06:44.725 "zone_append": false, 00:06:44.725 "compare": false, 00:06:44.725 "compare_and_write": false, 00:06:44.725 "abort": true, 00:06:44.725 "seek_hole": false, 00:06:44.725 "seek_data": false, 00:06:44.725 "copy": true, 00:06:44.725 "nvme_iov_md": false 00:06:44.725 }, 00:06:44.725 
"memory_domains": [ 00:06:44.725 { 00:06:44.725 "dma_device_id": "system", 00:06:44.725 "dma_device_type": 1 00:06:44.725 }, 00:06:44.725 { 00:06:44.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.725 "dma_device_type": 2 00:06:44.725 } 00:06:44.725 ], 00:06:44.725 "driver_specific": {} 00:06:44.725 }, 00:06:44.725 { 00:06:44.725 "name": "Passthru0", 00:06:44.725 "aliases": [ 00:06:44.725 "129b71ee-6cff-5eb7-b049-675d8f7c04d6" 00:06:44.725 ], 00:06:44.725 "product_name": "passthru", 00:06:44.725 "block_size": 512, 00:06:44.725 "num_blocks": 16384, 00:06:44.725 "uuid": "129b71ee-6cff-5eb7-b049-675d8f7c04d6", 00:06:44.725 "assigned_rate_limits": { 00:06:44.725 "rw_ios_per_sec": 0, 00:06:44.725 "rw_mbytes_per_sec": 0, 00:06:44.725 "r_mbytes_per_sec": 0, 00:06:44.725 "w_mbytes_per_sec": 0 00:06:44.725 }, 00:06:44.725 "claimed": false, 00:06:44.725 "zoned": false, 00:06:44.725 "supported_io_types": { 00:06:44.725 "read": true, 00:06:44.725 "write": true, 00:06:44.725 "unmap": true, 00:06:44.725 "flush": true, 00:06:44.725 "reset": true, 00:06:44.725 "nvme_admin": false, 00:06:44.725 "nvme_io": false, 00:06:44.725 "nvme_io_md": false, 00:06:44.725 "write_zeroes": true, 00:06:44.725 "zcopy": true, 00:06:44.725 "get_zone_info": false, 00:06:44.725 "zone_management": false, 00:06:44.725 "zone_append": false, 00:06:44.725 "compare": false, 00:06:44.725 "compare_and_write": false, 00:06:44.725 "abort": true, 00:06:44.725 "seek_hole": false, 00:06:44.725 "seek_data": false, 00:06:44.725 "copy": true, 00:06:44.725 "nvme_iov_md": false 00:06:44.725 }, 00:06:44.725 "memory_domains": [ 00:06:44.725 { 00:06:44.725 "dma_device_id": "system", 00:06:44.725 "dma_device_type": 1 00:06:44.725 }, 00:06:44.725 { 00:06:44.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.725 "dma_device_type": 2 00:06:44.725 } 00:06:44.725 ], 00:06:44.725 "driver_specific": { 00:06:44.725 "passthru": { 00:06:44.725 "name": "Passthru0", 00:06:44.725 "base_bdev_name": "Malloc2" 00:06:44.725 } 00:06:44.725 } 00:06:44.725 } 00:06:44.725 ]' 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:44.725 07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:44.983 
07:12:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:44.983 00:06:44.983 real 0m0.284s 00:06:44.983 user 0m0.167s 00:06:44.983 sys 0m0.056s 00:06:44.983 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.983 07:12:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.983 ************************************ 00:06:44.983 END TEST rpc_daemon_integrity 00:06:44.983 ************************************ 00:06:44.983 07:12:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:44.984 07:12:17 rpc -- rpc/rpc.sh@84 -- # killprocess 2504410 00:06:44.984 07:12:17 rpc -- common/autotest_common.sh@950 -- # '[' -z 2504410 ']' 00:06:44.984 07:12:17 rpc -- common/autotest_common.sh@954 -- # kill -0 2504410 00:06:44.984 07:12:17 rpc -- common/autotest_common.sh@955 -- # uname 00:06:44.984 07:12:17 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.984 07:12:17 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2504410 00:06:44.984 07:12:17 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.984 07:12:17 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.984 07:12:17 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2504410' 00:06:44.984 killing process with pid 2504410 00:06:44.984 07:12:17 rpc -- common/autotest_common.sh@969 -- # kill 2504410 00:06:44.984 07:12:17 rpc -- common/autotest_common.sh@974 -- # wait 2504410 00:06:45.242 00:06:45.242 real 0m2.580s 00:06:45.242 user 0m3.257s 00:06:45.242 sys 0m0.837s 00:06:45.242 07:12:17 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.242 07:12:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.242 ************************************ 00:06:45.242 END TEST rpc 00:06:45.242 ************************************ 00:06:45.243 07:12:17 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:45.243 07:12:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.243 07:12:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.243 07:12:17 -- common/autotest_common.sh@10 -- # set +x 00:06:45.243 ************************************ 00:06:45.243 START TEST skip_rpc 00:06:45.243 ************************************ 00:06:45.243 07:12:17 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:45.501 * Looking for test storage... 
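All of the START TEST / END TEST banners and the real/user/sys lines in this log come from the run_test helper, which wraps each test function in a timed run. A simplified sketch of the pattern (not the exact autotest_common.sh implementation, which additionally handles xtrace suppression and argument checks):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # emits the real/user/sys timing lines
        echo "END TEST $name"
    }

    run_test skip_rpc test_skip_rpc   # usage as in skip_rpc.sh@73 below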
00:06:45.501 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:45.501 07:12:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:45.501 07:12:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:45.501 07:12:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:45.501 07:12:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.501 07:12:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.501 07:12:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.501 ************************************ 00:06:45.501 START TEST skip_rpc 00:06:45.501 ************************************ 00:06:45.501 07:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:45.501 07:12:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2504942 00:06:45.501 07:12:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.501 07:12:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:45.501 07:12:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:45.501 [2024-07-25 07:12:17.942603] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:06:45.501 [2024-07-25 07:12:17.942666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504942 ] 00:06:45.501 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.501 [2024-07-25 07:12:18.028616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.758 [2024-07-25 07:12:18.099620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 
-- # trap - SIGINT SIGTERM EXIT 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2504942 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2504942 ']' 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2504942 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2504942 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2504942' 00:06:51.092 killing process with pid 2504942 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2504942 00:06:51.092 07:12:22 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2504942 00:06:51.092 00:06:51.092 real 0m5.375s 00:06:51.092 user 0m5.109s 00:06:51.092 sys 0m0.302s 00:06:51.092 07:12:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.092 07:12:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.092 ************************************ 00:06:51.092 END TEST skip_rpc 00:06:51.092 ************************************ 00:06:51.092 07:12:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:51.092 07:12:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.092 07:12:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.092 07:12:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.092 ************************************ 00:06:51.092 START TEST skip_rpc_with_json 00:06:51.092 ************************************ 00:06:51.092 07:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:51.092 07:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:51.092 07:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2506026 00:06:51.092 07:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.092 07:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.092 07:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2506026 00:06:51.092 07:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2506026 ']' 00:06:51.092 07:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.092 07:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.092 07:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
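The skip_rpc case that just finished checks the negative path: a target started with --no-rpc-server never creates the RPC socket, so any client call has to fail, and the NOT wrapper inverts that exit status into a pass. A rough standalone equivalent (binary path and the fixed sleep are illustrative):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5
    # spdk_get_version succeeds against any live RPC server, so it must
    # error out here; a zero exit status would mean the test failed
    if ./scripts/rpc.py spdk_get_version; then
        echo 'FAIL: RPC server unexpectedly reachable' >&2
    fi
    kill "$pid"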
00:06:51.092 07:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.092 07:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.092 [2024-07-25 07:12:23.378691] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:06:51.092 [2024-07-25 07:12:23.378732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506026 ] 00:06:51.092 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.092 [2024-07-25 07:12:23.460795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.092 [2024-07-25 07:12:23.532952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.660 [2024-07-25 07:12:24.168687] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:51.660 request: 00:06:51.660 { 00:06:51.660 "trtype": "tcp", 00:06:51.660 "method": "nvmf_get_transports", 00:06:51.660 "req_id": 1 00:06:51.660 } 00:06:51.660 Got JSON-RPC error response 00:06:51.660 response: 00:06:51.660 { 00:06:51.660 "code": -19, 00:06:51.660 "message": "No such device" 00:06:51.660 } 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.660 [2024-07-25 07:12:24.180798] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.660 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.920 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.920 07:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:51.920 { 00:06:51.920 "subsystems": [ 00:06:51.920 { 00:06:51.920 "subsystem": "keyring", 00:06:51.920 "config": [] 00:06:51.920 }, 00:06:51.920 { 00:06:51.920 "subsystem": "iobuf", 00:06:51.920 "config": [ 00:06:51.920 { 00:06:51.920 "method": "iobuf_set_options", 00:06:51.920 "params": { 00:06:51.920 "small_pool_count": 8192, 00:06:51.920 "large_pool_count": 1024, 00:06:51.920 "small_bufsize": 8192, 00:06:51.920 "large_bufsize": 135168 00:06:51.920 } 00:06:51.920 } 00:06:51.920 ] 00:06:51.920 }, 00:06:51.920 { 00:06:51.920 "subsystem": 
"sock", 00:06:51.920 "config": [ 00:06:51.920 { 00:06:51.920 "method": "sock_set_default_impl", 00:06:51.920 "params": { 00:06:51.921 "impl_name": "posix" 00:06:51.921 } 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "method": "sock_impl_set_options", 00:06:51.921 "params": { 00:06:51.921 "impl_name": "ssl", 00:06:51.921 "recv_buf_size": 4096, 00:06:51.921 "send_buf_size": 4096, 00:06:51.921 "enable_recv_pipe": true, 00:06:51.921 "enable_quickack": false, 00:06:51.921 "enable_placement_id": 0, 00:06:51.921 "enable_zerocopy_send_server": true, 00:06:51.921 "enable_zerocopy_send_client": false, 00:06:51.921 "zerocopy_threshold": 0, 00:06:51.921 "tls_version": 0, 00:06:51.921 "enable_ktls": false 00:06:51.921 } 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "method": "sock_impl_set_options", 00:06:51.921 "params": { 00:06:51.921 "impl_name": "posix", 00:06:51.921 "recv_buf_size": 2097152, 00:06:51.921 "send_buf_size": 2097152, 00:06:51.921 "enable_recv_pipe": true, 00:06:51.921 "enable_quickack": false, 00:06:51.921 "enable_placement_id": 0, 00:06:51.921 "enable_zerocopy_send_server": true, 00:06:51.921 "enable_zerocopy_send_client": false, 00:06:51.921 "zerocopy_threshold": 0, 00:06:51.921 "tls_version": 0, 00:06:51.921 "enable_ktls": false 00:06:51.921 } 00:06:51.921 } 00:06:51.921 ] 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "subsystem": "vmd", 00:06:51.921 "config": [] 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "subsystem": "accel", 00:06:51.921 "config": [ 00:06:51.921 { 00:06:51.921 "method": "accel_set_options", 00:06:51.921 "params": { 00:06:51.921 "small_cache_size": 128, 00:06:51.921 "large_cache_size": 16, 00:06:51.921 "task_count": 2048, 00:06:51.921 "sequence_count": 2048, 00:06:51.921 "buf_count": 2048 00:06:51.921 } 00:06:51.921 } 00:06:51.921 ] 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "subsystem": "bdev", 00:06:51.921 "config": [ 00:06:51.921 { 00:06:51.921 "method": "bdev_set_options", 00:06:51.921 "params": { 00:06:51.921 "bdev_io_pool_size": 65535, 00:06:51.921 "bdev_io_cache_size": 256, 00:06:51.921 "bdev_auto_examine": true, 00:06:51.921 "iobuf_small_cache_size": 128, 00:06:51.921 "iobuf_large_cache_size": 16 00:06:51.921 } 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "method": "bdev_raid_set_options", 00:06:51.921 "params": { 00:06:51.921 "process_window_size_kb": 1024, 00:06:51.921 "process_max_bandwidth_mb_sec": 0 00:06:51.921 } 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "method": "bdev_iscsi_set_options", 00:06:51.921 "params": { 00:06:51.921 "timeout_sec": 30 00:06:51.921 } 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "method": "bdev_nvme_set_options", 00:06:51.921 "params": { 00:06:51.921 "action_on_timeout": "none", 00:06:51.921 "timeout_us": 0, 00:06:51.921 "timeout_admin_us": 0, 00:06:51.921 "keep_alive_timeout_ms": 10000, 00:06:51.921 "arbitration_burst": 0, 00:06:51.921 "low_priority_weight": 0, 00:06:51.921 "medium_priority_weight": 0, 00:06:51.921 "high_priority_weight": 0, 00:06:51.921 "nvme_adminq_poll_period_us": 10000, 00:06:51.921 "nvme_ioq_poll_period_us": 0, 00:06:51.921 "io_queue_requests": 0, 00:06:51.921 "delay_cmd_submit": true, 00:06:51.921 "transport_retry_count": 4, 00:06:51.921 "bdev_retry_count": 3, 00:06:51.921 "transport_ack_timeout": 0, 00:06:51.921 "ctrlr_loss_timeout_sec": 0, 00:06:51.921 "reconnect_delay_sec": 0, 00:06:51.921 "fast_io_fail_timeout_sec": 0, 00:06:51.921 "disable_auto_failback": false, 00:06:51.921 "generate_uuids": false, 00:06:51.921 "transport_tos": 0, 00:06:51.921 "nvme_error_stat": false, 00:06:51.921 "rdma_srq_size": 
0, 00:06:51.921 "io_path_stat": false, 00:06:51.921 "allow_accel_sequence": false, 00:06:51.921 "rdma_max_cq_size": 0, 00:06:51.921 "rdma_cm_event_timeout_ms": 0, 00:06:51.921 "dhchap_digests": [ 00:06:51.921 "sha256", 00:06:51.921 "sha384", 00:06:51.921 "sha512" 00:06:51.921 ], 00:06:51.921 "dhchap_dhgroups": [ 00:06:51.921 "null", 00:06:51.921 "ffdhe2048", 00:06:51.921 "ffdhe3072", 00:06:51.921 "ffdhe4096", 00:06:51.921 "ffdhe6144", 00:06:51.921 "ffdhe8192" 00:06:51.921 ] 00:06:51.921 } 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "method": "bdev_nvme_set_hotplug", 00:06:51.921 "params": { 00:06:51.921 "period_us": 100000, 00:06:51.921 "enable": false 00:06:51.921 } 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "method": "bdev_wait_for_examine" 00:06:51.921 } 00:06:51.921 ] 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "subsystem": "scsi", 00:06:51.921 "config": null 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "subsystem": "scheduler", 00:06:51.921 "config": [ 00:06:51.921 { 00:06:51.921 "method": "framework_set_scheduler", 00:06:51.921 "params": { 00:06:51.921 "name": "static" 00:06:51.921 } 00:06:51.921 } 00:06:51.921 ] 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "subsystem": "vhost_scsi", 00:06:51.921 "config": [] 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "subsystem": "vhost_blk", 00:06:51.921 "config": [] 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "subsystem": "ublk", 00:06:51.921 "config": [] 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "subsystem": "nbd", 00:06:51.921 "config": [] 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "subsystem": "nvmf", 00:06:51.921 "config": [ 00:06:51.921 { 00:06:51.921 "method": "nvmf_set_config", 00:06:51.921 "params": { 00:06:51.921 "discovery_filter": "match_any", 00:06:51.921 "admin_cmd_passthru": { 00:06:51.921 "identify_ctrlr": false 00:06:51.921 } 00:06:51.921 } 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "method": "nvmf_set_max_subsystems", 00:06:51.921 "params": { 00:06:51.921 "max_subsystems": 1024 00:06:51.921 } 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "method": "nvmf_set_crdt", 00:06:51.921 "params": { 00:06:51.921 "crdt1": 0, 00:06:51.921 "crdt2": 0, 00:06:51.921 "crdt3": 0 00:06:51.921 } 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "method": "nvmf_create_transport", 00:06:51.921 "params": { 00:06:51.921 "trtype": "TCP", 00:06:51.921 "max_queue_depth": 128, 00:06:51.921 "max_io_qpairs_per_ctrlr": 127, 00:06:51.921 "in_capsule_data_size": 4096, 00:06:51.921 "max_io_size": 131072, 00:06:51.921 "io_unit_size": 131072, 00:06:51.921 "max_aq_depth": 128, 00:06:51.921 "num_shared_buffers": 511, 00:06:51.921 "buf_cache_size": 4294967295, 00:06:51.921 "dif_insert_or_strip": false, 00:06:51.921 "zcopy": false, 00:06:51.921 "c2h_success": true, 00:06:51.921 "sock_priority": 0, 00:06:51.921 "abort_timeout_sec": 1, 00:06:51.921 "ack_timeout": 0, 00:06:51.921 "data_wr_pool_size": 0 00:06:51.921 } 00:06:51.921 } 00:06:51.921 ] 00:06:51.921 }, 00:06:51.921 { 00:06:51.921 "subsystem": "iscsi", 00:06:51.921 "config": [ 00:06:51.921 { 00:06:51.921 "method": "iscsi_set_options", 00:06:51.921 "params": { 00:06:51.921 "node_base": "iqn.2016-06.io.spdk", 00:06:51.921 "max_sessions": 128, 00:06:51.921 "max_connections_per_session": 2, 00:06:51.921 "max_queue_depth": 64, 00:06:51.921 "default_time2wait": 2, 00:06:51.921 "default_time2retain": 20, 00:06:51.921 "first_burst_length": 8192, 00:06:51.921 "immediate_data": true, 00:06:51.921 "allow_duplicated_isid": false, 00:06:51.921 "error_recovery_level": 0, 00:06:51.921 "nop_timeout": 60, 00:06:51.921 
"nop_in_interval": 30, 00:06:51.921 "disable_chap": false, 00:06:51.921 "require_chap": false, 00:06:51.921 "mutual_chap": false, 00:06:51.921 "chap_group": 0, 00:06:51.921 "max_large_datain_per_connection": 64, 00:06:51.921 "max_r2t_per_connection": 4, 00:06:51.921 "pdu_pool_size": 36864, 00:06:51.921 "immediate_data_pool_size": 16384, 00:06:51.921 "data_out_pool_size": 2048 00:06:51.921 } 00:06:51.921 } 00:06:51.921 ] 00:06:51.921 } 00:06:51.921 ] 00:06:51.921 } 00:06:51.921 07:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:51.921 07:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2506026 00:06:51.921 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2506026 ']' 00:06:51.921 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2506026 00:06:51.921 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:51.921 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.921 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2506026 00:06:51.921 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.921 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.921 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2506026' 00:06:51.921 killing process with pid 2506026 00:06:51.922 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2506026 00:06:51.922 07:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2506026 00:06:52.181 07:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2506304 00:06:52.181 07:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:52.181 07:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:57.455 07:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2506304 00:06:57.455 07:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2506304 ']' 00:06:57.455 07:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2506304 00:06:57.455 07:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:57.455 07:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.455 07:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2506304 00:06:57.455 07:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.455 07:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.455 07:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2506304' 00:06:57.455 killing process with pid 2506304 00:06:57.455 07:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2506304 00:06:57.455 07:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2506304 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP 
Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:57.715 00:06:57.715 real 0m6.737s 00:06:57.715 user 0m6.503s 00:06:57.715 sys 0m0.663s 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.715 ************************************ 00:06:57.715 END TEST skip_rpc_with_json 00:06:57.715 ************************************ 00:06:57.715 07:12:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:57.715 07:12:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.715 07:12:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.715 07:12:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.715 ************************************ 00:06:57.715 START TEST skip_rpc_with_delay 00:06:57.715 ************************************ 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:57.715 [2024-07-25 07:12:30.187978] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
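That *ERROR* from spdk_app_start is the expected result: skip_rpc_with_delay asserts that --wait-for-rpc, which pauses framework initialization until a start-up RPC arrives, is rejected outright when combined with --no-rpc-server, since no RPC could ever arrive. The whole check condenses to one line (path illustrative):

    # must exit non-zero and print the 'Cannot use --wait-for-rpc' error
    ! ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc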
00:06:57.715 [2024-07-25 07:12:30.188042] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.715 00:06:57.715 real 0m0.055s 00:06:57.715 user 0m0.030s 00:06:57.715 sys 0m0.025s 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.715 07:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:57.715 ************************************ 00:06:57.715 END TEST skip_rpc_with_delay 00:06:57.715 ************************************ 00:06:57.715 07:12:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:57.975 07:12:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:57.975 07:12:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:57.975 07:12:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.975 07:12:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.975 07:12:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.975 ************************************ 00:06:57.975 START TEST exit_on_failed_rpc_init 00:06:57.975 ************************************ 00:06:57.975 07:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:57.975 07:12:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2507164 00:06:57.975 07:12:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2507164 00:06:57.975 07:12:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.975 07:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2507164 ']' 00:06:57.975 07:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.975 07:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.975 07:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.975 07:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.975 07:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:57.975 [2024-07-25 07:12:30.320711] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
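Here the first target is coming up on core mask 0x1 and takes ownership of /var/tmp/spdk.sock; exit_on_failed_rpc_init then launches a second target on 0x2 and expects it to abort during RPC initialization because the socket is already in use. A rough reproduction (paths illustrative; both targets default to the same socket):

    ./build/bin/spdk_tgt -m 0x1 &     # first target owns /var/tmp/spdk.sock
    first=$!
    sleep 5
    # the second target must fail with 'RPC Unix domain socket path ... in use'
    ! ./build/bin/spdk_tgt -m 0x2
    kill "$first"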
00:06:57.975 [2024-07-25 07:12:30.320755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507164 ] 00:06:57.975 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.975 [2024-07-25 07:12:30.405107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.975 [2024-07-25 07:12:30.477411] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:58.913 [2024-07-25 07:12:31.157774] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:06:58.913 [2024-07-25 07:12:31.157824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507428 ] 00:06:58.913 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.913 [2024-07-25 07:12:31.234881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.913 [2024-07-25 07:12:31.303435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.913 [2024-07-25 07:12:31.303503] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:06:58.913 [2024-07-25 07:12:31.303515] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:58.913 [2024-07-25 07:12:31.303523] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2507164 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2507164 ']' 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2507164 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2507164 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2507164' 00:06:58.913 killing process with pid 2507164 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2507164 00:06:58.913 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2507164 00:06:59.481 00:06:59.481 real 0m1.462s 00:06:59.481 user 0m1.615s 00:06:59.481 sys 0m0.474s 00:06:59.481 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.481 07:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:59.481 ************************************ 00:06:59.481 END TEST exit_on_failed_rpc_init 00:06:59.481 ************************************ 00:06:59.481 07:12:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:59.481 00:06:59.481 real 0m14.030s 00:06:59.481 user 0m13.415s 00:06:59.481 sys 0m1.737s 00:06:59.481 07:12:31 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.481 07:12:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.481 ************************************ 00:06:59.481 END TEST skip_rpc 00:06:59.481 ************************************ 00:06:59.481 07:12:31 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:59.481 07:12:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.481 07:12:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.481 
07:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:59.481 ************************************ 00:06:59.481 START TEST rpc_client 00:06:59.481 ************************************ 00:06:59.481 07:12:31 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:59.481 * Looking for test storage... 00:06:59.481 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:06:59.481 07:12:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:59.481 OK 00:06:59.481 07:12:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:59.481 00:06:59.481 real 0m0.139s 00:06:59.481 user 0m0.061s 00:06:59.481 sys 0m0.088s 00:06:59.481 07:12:31 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.481 07:12:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:59.481 ************************************ 00:06:59.481 END TEST rpc_client 00:06:59.481 ************************************ 00:06:59.741 07:12:32 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:59.741 07:12:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.741 07:12:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.741 07:12:32 -- common/autotest_common.sh@10 -- # set +x 00:06:59.741 ************************************ 00:06:59.741 START TEST json_config 00:06:59.741 ************************************ 00:06:59.741 07:12:32 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:59.741 07:12:32 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.741 07:12:32 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.741 07:12:32 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.741 07:12:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.741 07:12:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.741 07:12:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.741 07:12:32 json_config -- paths/export.sh@5 -- # export PATH 00:06:59.741 07:12:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@47 -- # : 0 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:59.741 07:12:32 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:59.741 INFO: JSON configuration test init 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:59.741 07:12:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:59.741 07:12:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:59.741 07:12:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:59.741 07:12:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:59.741 07:12:32 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:59.741 07:12:32 json_config -- json_config/common.sh@9 -- # local app=target 00:06:59.741 07:12:32 json_config -- json_config/common.sh@10 -- # shift 00:06:59.741 07:12:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:59.741 07:12:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:59.741 07:12:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:59.741 07:12:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:59.741 07:12:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:59.741 07:12:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2507692 00:06:59.741 07:12:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:59.741 Waiting for target to run... 
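Note on the step that follows: the target is launched with --wait-for-rpc, which holds subsystem initialization until an RPC arrives, and waitforlisten then polls the Unix socket until the application answers. A minimal sketch of the same launch-and-wait pattern, assuming the workspace layout used in this job (the rpc_get_methods and framework_start_init calls are standard SPDK RPCs, not copied from this trace):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  tgt_pid=$!
  # poll until the RPC server is answering on the socket
  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # with --wait-for-rpc, initialization proceeds only after this call
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init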
00:06:59.741 07:12:32 json_config -- json_config/common.sh@25 -- # waitforlisten 2507692 /var/tmp/spdk_tgt.sock 00:06:59.741 07:12:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:59.741 07:12:32 json_config -- common/autotest_common.sh@831 -- # '[' -z 2507692 ']' 00:06:59.741 07:12:32 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:59.741 07:12:32 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.741 07:12:32 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:59.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:59.741 07:12:32 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.741 07:12:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:59.741 [2024-07-25 07:12:32.181193] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:06:59.741 [2024-07-25 07:12:32.181247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507692 ] 00:06:59.741 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.000 [2024-07-25 07:12:32.482447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.259 [2024-07-25 07:12:32.545571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.517 07:12:32 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.517 07:12:32 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:00.517 07:12:32 json_config -- json_config/common.sh@26 -- # echo '' 00:07:00.517 00:07:00.517 07:12:32 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:00.517 07:12:32 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:00.518 07:12:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.518 07:12:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.518 07:12:32 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:07:00.518 07:12:32 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:00.518 07:12:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:00.518 07:12:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.518 07:12:33 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:00.518 07:12:33 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:00.518 07:12:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:03.803 07:12:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:03.803 07:12:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.803 07:12:36 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:03.803 07:12:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@51 -- # sort 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:03.803 07:12:36 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:03.803 07:12:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:03.803 07:12:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.062 07:12:36 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:04.062 07:12:36 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:04.062 07:12:36 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:04.062 07:12:36 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:07:04.062 07:12:36 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:07:04.062 07:12:36 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:07:04.062 07:12:36 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:07:04.062 07:12:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:04.062 07:12:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.062 07:12:36 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:04.062 07:12:36 json_config -- json_config/json_config.sh@237 -- # [[ rdma == \r\d\m\a ]] 00:07:04.062 07:12:36 json_config -- json_config/json_config.sh@238 -- # TEST_TRANSPORT=rdma 00:07:04.062 07:12:36 json_config -- json_config/json_config.sh@238 -- # nvmftestinit 00:07:04.062 07:12:36 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:04.062 07:12:36 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.062 07:12:36 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:04.062 07:12:36 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:04.062 07:12:36 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:04.062 07:12:36 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.062 07:12:36 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:07:04.062 
07:12:36 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.062 07:12:36 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:07:04.062 07:12:36 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:04.062 07:12:36 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:07:04.062 07:12:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.180 07:12:44 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.180 07:12:44 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:07:12.180 07:12:44 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:12.180 07:12:44 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:12.180 07:12:44 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:12.180 07:12:44 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:12.180 07:12:44 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:12.180 07:12:44 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:07:12.180 07:12:44 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:12.180 07:12:44 json_config -- nvmf/common.sh@296 -- # e810=() 00:07:12.180 07:12:44 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@297 -- # x722=() 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@298 -- # mlx=() 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:12.181 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:12.181 
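The PCI scan above matches vendor/device IDs from nvmf/common.sh: 0x15b3 is the Mellanox vendor ID and 0x1015 the ConnectX-4 Lx device ID; the second port of the same dual-port NIC (0000:d9:00.1) is matched just below. Outside the harness the same inventory can be taken directly; a sketch, not part of the test scripts:

  lspci -nn -d 15b3:                        # list Mellanox PCI functions with numeric IDs
  ls /sys/bus/pci/devices/0000:d9:00.0/net  # netdev backing the first port (mlx_0_0 in this log)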
07:12:44 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:12.181 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:12.181 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:12.181 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@58 -- # uname 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@58 -- # 
'[' Linux '!=' Linux ']' 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:12.181 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:12.181 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:12.181 altname enp217s0f0np0 00:07:12.181 altname ens818f0np0 00:07:12.181 inet 192.168.100.8/24 scope global mlx_0_0 00:07:12.181 valid_lft forever preferred_lft forever 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_1 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:12.181 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:12.181 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:12.181 altname enp217s0f1np1 00:07:12.181 altname ens818f1np1 00:07:12.181 inet 192.168.100.9/24 scope global mlx_0_1 00:07:12.181 valid_lft forever preferred_lft forever 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@422 -- # return 0 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:12.181 07:12:44 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:12.182 07:12:44 json_config -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:12.182 192.168.100.9' 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:12.182 192.168.100.9' 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@457 -- # head -n 1 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:12.182 192.168.100.9' 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@458 -- # head -n 1 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:12.182 07:12:44 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:12.441 07:12:44 json_config -- json_config/json_config.sh@241 -- # [[ -z 192.168.100.8 ]] 00:07:12.441 07:12:44 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:12.441 07:12:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:12.441 MallocForNvmf0 00:07:12.441 07:12:44 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:12.441 07:12:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:12.700 MallocForNvmf1 00:07:12.700 07:12:45 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:07:12.700 07:12:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:07:12.700 [2024-07-25 07:12:45.205892] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:12.959 [2024-07-25 07:12:45.234579] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x905760/0xa31ec0) succeed. 00:07:12.959 [2024-07-25 07:12:45.246516] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x907950/0x9b1e40) succeed. 
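Condensed from the trace above, the target was populated with two malloc bdevs and an RDMA transport over the spdk_tgt.sock RPC channel (the long workspace path to rpc.py is shortened here for readability):

  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0

The -c 0 in-capsule setting is what triggers the rdma.c warning above: the transport raises the in-capsule data size to the 256-byte minimum required to support msdbd=16.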
00:07:12.959 07:12:45 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:12.959 07:12:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:12.959 07:12:45 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:12.959 07:12:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:13.218 07:12:45 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:13.218 07:12:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:13.477 07:12:45 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:13.477 07:12:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:13.477 [2024-07-25 07:12:45.937929] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:13.477 07:12:45 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:07:13.477 07:12:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:13.477 07:12:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.477 07:12:46 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:07:13.477 07:12:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:13.477 07:12:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.736 07:12:46 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:07:13.736 07:12:46 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:13.736 07:12:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:13.736 MallocBdevForConfigChangeCheck 00:07:13.736 07:12:46 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:07:13.736 07:12:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:13.736 07:12:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.736 07:12:46 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:07:13.736 07:12:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:14.303 07:12:46 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:07:14.303 INFO: shutting down applications... 
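For reference, the subsystem wiring json_config.sh just performed, with the rpc.py path shortened as before:

  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

A canary bdev, MallocBdevForConfigChangeCheck, is then added, and save_config snapshots the running configuration (reused below as spdk_tgt_config.json) for the comparison passes.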
00:07:14.303 07:12:46 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:14.303 07:12:46 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:14.304 07:12:46 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:14.304 07:12:46 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:16.872 Calling clear_iscsi_subsystem 00:07:16.872 Calling clear_nvmf_subsystem 00:07:16.872 Calling clear_nbd_subsystem 00:07:16.872 Calling clear_ublk_subsystem 00:07:16.872 Calling clear_vhost_blk_subsystem 00:07:16.872 Calling clear_vhost_scsi_subsystem 00:07:16.872 Calling clear_bdev_subsystem 00:07:16.872 07:12:49 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:07:16.872 07:12:49 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:16.872 07:12:49 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:16.872 07:12:49 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:16.872 07:12:49 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:16.872 07:12:49 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:16.872 07:12:49 json_config -- json_config/json_config.sh@349 -- # break 00:07:16.872 07:12:49 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:16.872 07:12:49 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:16.872 07:12:49 json_config -- json_config/common.sh@31 -- # local app=target 00:07:16.872 07:12:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:16.872 07:12:49 json_config -- json_config/common.sh@35 -- # [[ -n 2507692 ]] 00:07:16.872 07:12:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2507692 00:07:16.872 07:12:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:16.872 07:12:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:16.872 07:12:49 json_config -- json_config/common.sh@41 -- # kill -0 2507692 00:07:16.872 07:12:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:17.441 07:12:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:17.441 07:12:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:17.441 07:12:49 json_config -- json_config/common.sh@41 -- # kill -0 2507692 00:07:17.441 07:12:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:17.441 07:12:49 json_config -- json_config/common.sh@43 -- # break 00:07:17.441 07:12:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:17.441 07:12:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:17.441 SPDK target shutdown done 00:07:17.441 07:12:49 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:07:17.441 INFO: relaunching applications... 
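The shutdown just logged is a SIGINT followed by a bounded liveness poll: up to 30 iterations of kill -0 with half-second sleeps, as visible in the common.sh trace. Stripped of the harness plumbing, the same pattern is:

  kill -SIGINT "$tgt_pid"
  for i in $(seq 1 30); do
      kill -0 "$tgt_pid" 2>/dev/null || break  # loop ends once the process is gone
      sleep 0.5
  done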
00:07:17.441 07:12:49 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:17.441 07:12:49 json_config -- json_config/common.sh@9 -- # local app=target 00:07:17.441 07:12:49 json_config -- json_config/common.sh@10 -- # shift 00:07:17.441 07:12:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:17.441 07:12:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:17.441 07:12:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:17.441 07:12:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:17.441 07:12:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:17.441 07:12:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2513383 00:07:17.441 07:12:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:17.441 Waiting for target to run... 00:07:17.441 07:12:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:17.441 07:12:49 json_config -- json_config/common.sh@25 -- # waitforlisten 2513383 /var/tmp/spdk_tgt.sock 00:07:17.441 07:12:49 json_config -- common/autotest_common.sh@831 -- # '[' -z 2513383 ']' 00:07:17.441 07:12:49 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:17.441 07:12:49 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.442 07:12:49 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:17.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:17.442 07:12:49 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.442 07:12:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:17.442 [2024-07-25 07:12:49.928818] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:17.442 [2024-07-25 07:12:49.928897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513383 ] 00:07:17.442 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.010 [2024-07-25 07:12:50.384864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.010 [2024-07-25 07:12:50.468213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.296 [2024-07-25 07:12:53.534114] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15b11b0/0x15dd600) succeed. 00:07:21.296 [2024-07-25 07:12:53.545406] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15b33a0/0x163d5e0) succeed. 
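The relaunch differs from the first boot in a single flag: instead of --wait-for-rpc, the target is handed the snapshot taken earlier, so the whole configuration is replayed at startup:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json

The fresh EAL instance (pid 2513383) and the recreated mlx5_0/mlx5_1 IB devices above, plus the re-registered 192.168.100.8:4420 listener just below, confirm the replay reached the same state as the RPC-driven setup.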
00:07:21.296 [2024-07-25 07:12:53.599067] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:21.554 07:12:54 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.554 07:12:54 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:21.554 07:12:54 json_config -- json_config/common.sh@26 -- # echo '' 00:07:21.554 00:07:21.554 07:12:54 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:21.554 07:12:54 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:21.554 INFO: Checking if target configuration is the same... 00:07:21.554 07:12:54 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:21.554 07:12:54 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:21.554 07:12:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:21.812 + '[' 2 -ne 2 ']' 00:07:21.812 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:21.812 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:07:21.812 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:21.812 +++ basename /dev/fd/62 00:07:21.812 ++ mktemp /tmp/62.XXX 00:07:21.812 + tmp_file_1=/tmp/62.h8e 00:07:21.812 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:21.812 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:21.812 + tmp_file_2=/tmp/spdk_tgt_config.json.2TA 00:07:21.812 + ret=0 00:07:21.812 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:22.070 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:22.070 + diff -u /tmp/62.h8e /tmp/spdk_tgt_config.json.2TA 00:07:22.070 + echo 'INFO: JSON config files are the same' 00:07:22.070 INFO: JSON config files are the same 00:07:22.070 + rm /tmp/62.h8e /tmp/spdk_tgt_config.json.2TA 00:07:22.070 + exit 0 00:07:22.070 07:12:54 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:22.070 07:12:54 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:22.070 INFO: changing configuration and checking if this can be detected... 
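json_diff.sh compares the live and on-disk configurations only after normalizing both with config_filter.py -method sort, so key and array ordering cannot produce false mismatches; the '+'-prefixed lines above are its own bash trace. The core of the check, sketched with illustrative temp-file names (the trace suggests config_filter.py filters stdin to stdout):

  rpc.py -s /var/tmp/spdk_tgt.sock save_config | config_filter.py -method sort > /tmp/live.json
  config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.json
  diff -u /tmp/live.json /tmp/disk.json && echo 'configs match'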
00:07:22.070 07:12:54 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:22.070 07:12:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:22.070 07:12:54 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:22.070 07:12:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:22.070 07:12:54 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:22.329 + '[' 2 -ne 2 ']' 00:07:22.329 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:22.329 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:07:22.329 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:22.329 +++ basename /dev/fd/62 00:07:22.329 ++ mktemp /tmp/62.XXX 00:07:22.329 + tmp_file_1=/tmp/62.Ztl 00:07:22.329 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:22.329 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:22.329 + tmp_file_2=/tmp/spdk_tgt_config.json.BZV 00:07:22.329 + ret=0 00:07:22.329 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:22.590 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:22.590 + diff -u /tmp/62.Ztl /tmp/spdk_tgt_config.json.BZV 00:07:22.590 + ret=1 00:07:22.590 + echo '=== Start of file: /tmp/62.Ztl ===' 00:07:22.590 + cat /tmp/62.Ztl 00:07:22.590 + echo '=== End of file: /tmp/62.Ztl ===' 00:07:22.590 + echo '' 00:07:22.590 + echo '=== Start of file: /tmp/spdk_tgt_config.json.BZV ===' 00:07:22.590 + cat /tmp/spdk_tgt_config.json.BZV 00:07:22.590 + echo '=== End of file: /tmp/spdk_tgt_config.json.BZV ===' 00:07:22.590 + echo '' 00:07:22.590 + rm /tmp/62.Ztl /tmp/spdk_tgt_config.json.BZV 00:07:22.590 + exit 1 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:07:22.590 INFO: configuration change detected. 
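The negative pass above works by deleting the canary bdev and re-running the same sort-and-diff, which now exits non-zero (ret=1) and dumps both normalized documents:

  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck

An empty diff at this point would mean save_config was not reflecting live RPC changes, which is exactly the regression this test guards against.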
00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:07:22.590 07:12:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:22.590 07:12:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@321 -- # [[ -n 2513383 ]] 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:07:22.590 07:12:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:22.590 07:12:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@197 -- # uname -s 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:07:22.590 07:12:54 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:07:22.590 07:12:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:22.590 07:12:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.590 07:12:55 json_config -- json_config/json_config.sh@327 -- # killprocess 2513383 00:07:22.590 07:12:55 json_config -- common/autotest_common.sh@950 -- # '[' -z 2513383 ']' 00:07:22.590 07:12:55 json_config -- common/autotest_common.sh@954 -- # kill -0 2513383 00:07:22.590 07:12:55 json_config -- common/autotest_common.sh@955 -- # uname 00:07:22.590 07:12:55 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.590 07:12:55 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2513383 00:07:22.590 07:12:55 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.590 07:12:55 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.590 07:12:55 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2513383' 00:07:22.590 killing process with pid 2513383 00:07:22.590 07:12:55 json_config -- common/autotest_common.sh@969 -- # kill 2513383 00:07:22.590 07:12:55 json_config -- common/autotest_common.sh@974 -- # wait 2513383 00:07:25.128 07:12:57 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:25.128 07:12:57 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:07:25.128 07:12:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.128 07:12:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.128 07:12:57 json_config -- json_config/json_config.sh@332 -- # return 0 00:07:25.128 07:12:57 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:07:25.128 INFO: Success 00:07:25.128 07:12:57 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:07:25.128 07:12:57 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:25.128 07:12:57 json_config -- nvmf/common.sh@117 -- # sync 00:07:25.128 07:12:57 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:07:25.128 07:12:57 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:07:25.128 07:12:57 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:25.128 07:12:57 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:25.128 07:12:57 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:07:25.128 00:07:25.128 real 0m25.460s 00:07:25.128 user 0m28.111s 00:07:25.128 sys 0m8.671s 00:07:25.128 07:12:57 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.128 07:12:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.128 ************************************ 00:07:25.128 END TEST json_config 00:07:25.128 ************************************ 00:07:25.128 07:12:57 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:25.128 07:12:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.128 07:12:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.128 07:12:57 -- common/autotest_common.sh@10 -- # set +x 00:07:25.128 ************************************ 00:07:25.128 START TEST json_config_extra_key 00:07:25.128 ************************************ 00:07:25.128 07:12:57 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:25.388 07:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:25.388 07:12:57 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.388 07:12:57 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.388 07:12:57 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.388 07:12:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.388 07:12:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.388 07:12:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.388 07:12:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:25.388 07:12:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.388 07:12:57 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.388 07:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:07:25.388 07:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:25.388 07:12:57 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:25.388 07:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:25.388 07:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:25.388 07:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:25.388 07:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:25.388 07:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:25.388 07:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:25.388 07:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:25.388 07:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:25.388 INFO: launching applications... 00:07:25.388 07:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:07:25.388 07:12:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:25.388 07:12:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:25.388 07:12:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:25.388 07:12:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:25.388 07:12:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:25.388 07:12:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:25.388 07:12:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:25.388 07:12:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2514838 00:07:25.388 07:12:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:25.388 Waiting for target to run... 00:07:25.388 07:12:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2514838 /var/tmp/spdk_tgt.sock 00:07:25.388 07:12:57 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2514838 ']' 00:07:25.388 07:12:57 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:07:25.388 07:12:57 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:25.388 07:12:57 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.388 07:12:57 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:25.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
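The app lifecycle that json_config/common.sh drives in this trace -- start spdk_tgt with --json on a private RPC socket, poll until it answers, later SIGINT it and poll until it exits -- condenses to roughly the following. This is a sketch assembled from the traced commands, not the script verbatim; $rootdir and the rpc_get_methods probe are assumptions:

    # Start: launch the target and wait for its RPC socket (waitforlisten).
    json_config_test_start_app() {
        local app=$1; shift
        "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
        app_pid[$app]=$!
        until "$rootdir/scripts/rpc.py" -s "${app_socket[$app]}" -t 1 rpc_get_methods &> /dev/null; do
            kill -0 "${app_pid[$app]}" || return 1   # target died during startup
            sleep 0.1
        done
    }

    # Stop: SIGINT, then poll up to 30 times at 0.5 s, exactly as traced.
    json_config_test_shutdown_app() {
        local app=$1 i
        kill -SIGINT "${app_pid[$app]}"
        for ((i = 0; i < 30; i++)); do
            kill -0 "${app_pid[$app]}" 2> /dev/null || break
            sleep 0.5
        done
        app_pid[$app]=
        echo 'SPDK target shutdown done'
    }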
00:07:25.389 07:12:57 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.389 07:12:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:25.389 [2024-07-25 07:12:57.771313] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:25.389 [2024-07-25 07:12:57.771364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2514838 ] 00:07:25.389 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.956 [2024-07-25 07:12:58.229510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.956 [2024-07-25 07:12:58.309192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.214 07:12:58 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.214 07:12:58 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:26.214 07:12:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:26.214 00:07:26.214 07:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:26.214 INFO: shutting down applications... 00:07:26.214 07:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:26.214 07:12:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:26.214 07:12:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:26.214 07:12:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2514838 ]] 00:07:26.214 07:12:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2514838 00:07:26.214 07:12:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:26.214 07:12:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:26.214 07:12:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2514838 00:07:26.214 07:12:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:26.782 07:12:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:26.782 07:12:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:26.782 07:12:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2514838 00:07:26.782 07:12:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:26.782 07:12:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:26.782 07:12:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:26.782 07:12:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:26.782 SPDK target shutdown done 00:07:26.782 07:12:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:26.782 Success 00:07:26.782 00:07:26.782 real 0m1.468s 00:07:26.782 user 0m1.028s 00:07:26.782 sys 0m0.582s 00:07:26.782 07:12:59 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.782 07:12:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:26.782 ************************************ 00:07:26.782 END TEST json_config_extra_key 00:07:26.782 ************************************ 00:07:26.782 07:12:59 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:26.782 07:12:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.782 07:12:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.782 07:12:59 -- common/autotest_common.sh@10 -- # set +x 00:07:26.782 ************************************ 00:07:26.782 START TEST alias_rpc 00:07:26.782 ************************************ 00:07:26.782 07:12:59 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:26.782 * Looking for test storage... 00:07:26.782 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:07:26.782 07:12:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:26.782 07:12:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2515160 00:07:26.782 07:12:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:26.782 07:12:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2515160 00:07:26.782 07:12:59 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2515160 ']' 00:07:26.782 07:12:59 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.783 07:12:59 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.783 07:12:59 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.783 07:12:59 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.783 07:12:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.783 [2024-07-25 07:12:59.309924] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
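alias_rpc arms trap 'killprocess $spdk_tgt_pid; exit 1' ERR and tears the target down through killprocess, which the trace below expands step by step (uname check, ps comm lookup, kill, wait). Reassembled as a sketch; autotest_common.sh's real error handling is richer:

    # killprocess, reconstructed from the traced steps.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid"                                        # fail if already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        fi
        echo "killing process with pid $pid"
        if [[ $process_name == sudo ]]; then
            sudo kill "$pid"   # assumption: escalate only for sudo-wrapped apps
        else
            kill "$pid"
        fi
        wait "$pid"            # reap and propagate the exit status
    }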
00:07:26.783 [2024-07-25 07:12:59.309976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2515160 ] 00:07:27.042 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.042 [2024-07-25 07:12:59.394043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.042 [2024-07-25 07:12:59.467268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.610 07:13:00 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.610 07:13:00 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:27.610 07:13:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:27.868 07:13:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2515160 00:07:27.869 07:13:00 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2515160 ']' 00:07:27.869 07:13:00 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2515160 00:07:27.869 07:13:00 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:27.869 07:13:00 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.869 07:13:00 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2515160 00:07:27.869 07:13:00 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.869 07:13:00 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.869 07:13:00 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2515160' 00:07:27.869 killing process with pid 2515160 00:07:27.869 07:13:00 alias_rpc -- common/autotest_common.sh@969 -- # kill 2515160 00:07:27.869 07:13:00 alias_rpc -- common/autotest_common.sh@974 -- # wait 2515160 00:07:28.437 00:07:28.437 real 0m1.505s 00:07:28.437 user 0m1.599s 00:07:28.437 sys 0m0.454s 00:07:28.437 07:13:00 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.437 07:13:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.437 ************************************ 00:07:28.437 END TEST alias_rpc 00:07:28.437 ************************************ 00:07:28.437 07:13:00 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:28.437 07:13:00 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:28.437 07:13:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.437 07:13:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.437 07:13:00 -- common/autotest_common.sh@10 -- # set +x 00:07:28.437 ************************************ 00:07:28.437 START TEST spdkcli_tcp 00:07:28.437 ************************************ 00:07:28.437 07:13:00 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:28.437 * Looking for test storage... 
00:07:28.437 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:07:28.437 07:13:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:07:28.437 07:13:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:28.437 07:13:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:07:28.437 07:13:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:28.437 07:13:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:28.437 07:13:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:28.437 07:13:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:28.437 07:13:00 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:28.437 07:13:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.437 07:13:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2515479 00:07:28.437 07:13:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2515479 00:07:28.437 07:13:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:28.437 07:13:00 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2515479 ']' 00:07:28.437 07:13:00 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.437 07:13:00 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.437 07:13:00 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.437 07:13:00 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.437 07:13:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.437 [2024-07-25 07:13:00.879148] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
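What the spdkcli_tcp test does next: it bridges TCP port 9998 to the target's UNIX-domain RPC socket with socat, then drives rpc.py over TCP so the JSON-RPC server's TCP path gets exercised end to end. The two commands as they appear in the trace (the explicit bridge cleanup is an assumption; the script's err_cleanup trap covers it):

    # TCP bridge and TCP-mode RPC call from spdkcli/tcp.sh.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r 100: connection retries, -t 2: timeout in seconds, as traced.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"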
00:07:28.437 [2024-07-25 07:13:00.879205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2515479 ] 00:07:28.437 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.437 [2024-07-25 07:13:00.963519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.695 [2024-07-25 07:13:01.038550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.695 [2024-07-25 07:13:01.038553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.263 07:13:01 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.263 07:13:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:29.263 07:13:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:29.263 07:13:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2515741 00:07:29.263 07:13:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:29.523 [ 00:07:29.523 "bdev_malloc_delete", 00:07:29.523 "bdev_malloc_create", 00:07:29.523 "bdev_null_resize", 00:07:29.523 "bdev_null_delete", 00:07:29.523 "bdev_null_create", 00:07:29.523 "bdev_nvme_cuse_unregister", 00:07:29.523 "bdev_nvme_cuse_register", 00:07:29.523 "bdev_opal_new_user", 00:07:29.523 "bdev_opal_set_lock_state", 00:07:29.523 "bdev_opal_delete", 00:07:29.523 "bdev_opal_get_info", 00:07:29.523 "bdev_opal_create", 00:07:29.523 "bdev_nvme_opal_revert", 00:07:29.523 "bdev_nvme_opal_init", 00:07:29.523 "bdev_nvme_send_cmd", 00:07:29.523 "bdev_nvme_get_path_iostat", 00:07:29.523 "bdev_nvme_get_mdns_discovery_info", 00:07:29.523 "bdev_nvme_stop_mdns_discovery", 00:07:29.523 "bdev_nvme_start_mdns_discovery", 00:07:29.523 "bdev_nvme_set_multipath_policy", 00:07:29.523 "bdev_nvme_set_preferred_path", 00:07:29.523 "bdev_nvme_get_io_paths", 00:07:29.523 "bdev_nvme_remove_error_injection", 00:07:29.523 "bdev_nvme_add_error_injection", 00:07:29.523 "bdev_nvme_get_discovery_info", 00:07:29.523 "bdev_nvme_stop_discovery", 00:07:29.523 "bdev_nvme_start_discovery", 00:07:29.523 "bdev_nvme_get_controller_health_info", 00:07:29.523 "bdev_nvme_disable_controller", 00:07:29.523 "bdev_nvme_enable_controller", 00:07:29.523 "bdev_nvme_reset_controller", 00:07:29.523 "bdev_nvme_get_transport_statistics", 00:07:29.523 "bdev_nvme_apply_firmware", 00:07:29.523 "bdev_nvme_detach_controller", 00:07:29.523 "bdev_nvme_get_controllers", 00:07:29.523 "bdev_nvme_attach_controller", 00:07:29.523 "bdev_nvme_set_hotplug", 00:07:29.523 "bdev_nvme_set_options", 00:07:29.523 "bdev_passthru_delete", 00:07:29.523 "bdev_passthru_create", 00:07:29.523 "bdev_lvol_set_parent_bdev", 00:07:29.523 "bdev_lvol_set_parent", 00:07:29.523 "bdev_lvol_check_shallow_copy", 00:07:29.523 "bdev_lvol_start_shallow_copy", 00:07:29.523 "bdev_lvol_grow_lvstore", 00:07:29.523 "bdev_lvol_get_lvols", 00:07:29.523 "bdev_lvol_get_lvstores", 00:07:29.523 "bdev_lvol_delete", 00:07:29.523 "bdev_lvol_set_read_only", 00:07:29.523 "bdev_lvol_resize", 00:07:29.523 "bdev_lvol_decouple_parent", 00:07:29.523 "bdev_lvol_inflate", 00:07:29.523 "bdev_lvol_rename", 00:07:29.523 "bdev_lvol_clone_bdev", 00:07:29.523 "bdev_lvol_clone", 00:07:29.523 "bdev_lvol_snapshot", 00:07:29.523 "bdev_lvol_create", 00:07:29.523 "bdev_lvol_delete_lvstore", 00:07:29.523 
"bdev_lvol_rename_lvstore", 00:07:29.523 "bdev_lvol_create_lvstore", 00:07:29.523 "bdev_raid_set_options", 00:07:29.523 "bdev_raid_remove_base_bdev", 00:07:29.523 "bdev_raid_add_base_bdev", 00:07:29.523 "bdev_raid_delete", 00:07:29.523 "bdev_raid_create", 00:07:29.523 "bdev_raid_get_bdevs", 00:07:29.523 "bdev_error_inject_error", 00:07:29.523 "bdev_error_delete", 00:07:29.523 "bdev_error_create", 00:07:29.523 "bdev_split_delete", 00:07:29.523 "bdev_split_create", 00:07:29.523 "bdev_delay_delete", 00:07:29.523 "bdev_delay_create", 00:07:29.523 "bdev_delay_update_latency", 00:07:29.523 "bdev_zone_block_delete", 00:07:29.523 "bdev_zone_block_create", 00:07:29.523 "blobfs_create", 00:07:29.523 "blobfs_detect", 00:07:29.523 "blobfs_set_cache_size", 00:07:29.523 "bdev_aio_delete", 00:07:29.523 "bdev_aio_rescan", 00:07:29.523 "bdev_aio_create", 00:07:29.523 "bdev_ftl_set_property", 00:07:29.523 "bdev_ftl_get_properties", 00:07:29.523 "bdev_ftl_get_stats", 00:07:29.523 "bdev_ftl_unmap", 00:07:29.523 "bdev_ftl_unload", 00:07:29.523 "bdev_ftl_delete", 00:07:29.523 "bdev_ftl_load", 00:07:29.523 "bdev_ftl_create", 00:07:29.523 "bdev_virtio_attach_controller", 00:07:29.523 "bdev_virtio_scsi_get_devices", 00:07:29.523 "bdev_virtio_detach_controller", 00:07:29.523 "bdev_virtio_blk_set_hotplug", 00:07:29.523 "bdev_iscsi_delete", 00:07:29.523 "bdev_iscsi_create", 00:07:29.523 "bdev_iscsi_set_options", 00:07:29.523 "accel_error_inject_error", 00:07:29.523 "ioat_scan_accel_module", 00:07:29.523 "dsa_scan_accel_module", 00:07:29.523 "iaa_scan_accel_module", 00:07:29.523 "keyring_file_remove_key", 00:07:29.524 "keyring_file_add_key", 00:07:29.524 "keyring_linux_set_options", 00:07:29.524 "iscsi_get_histogram", 00:07:29.524 "iscsi_enable_histogram", 00:07:29.524 "iscsi_set_options", 00:07:29.524 "iscsi_get_auth_groups", 00:07:29.524 "iscsi_auth_group_remove_secret", 00:07:29.524 "iscsi_auth_group_add_secret", 00:07:29.524 "iscsi_delete_auth_group", 00:07:29.524 "iscsi_create_auth_group", 00:07:29.524 "iscsi_set_discovery_auth", 00:07:29.524 "iscsi_get_options", 00:07:29.524 "iscsi_target_node_request_logout", 00:07:29.524 "iscsi_target_node_set_redirect", 00:07:29.524 "iscsi_target_node_set_auth", 00:07:29.524 "iscsi_target_node_add_lun", 00:07:29.524 "iscsi_get_stats", 00:07:29.524 "iscsi_get_connections", 00:07:29.524 "iscsi_portal_group_set_auth", 00:07:29.524 "iscsi_start_portal_group", 00:07:29.524 "iscsi_delete_portal_group", 00:07:29.524 "iscsi_create_portal_group", 00:07:29.524 "iscsi_get_portal_groups", 00:07:29.524 "iscsi_delete_target_node", 00:07:29.524 "iscsi_target_node_remove_pg_ig_maps", 00:07:29.524 "iscsi_target_node_add_pg_ig_maps", 00:07:29.524 "iscsi_create_target_node", 00:07:29.524 "iscsi_get_target_nodes", 00:07:29.524 "iscsi_delete_initiator_group", 00:07:29.524 "iscsi_initiator_group_remove_initiators", 00:07:29.524 "iscsi_initiator_group_add_initiators", 00:07:29.524 "iscsi_create_initiator_group", 00:07:29.524 "iscsi_get_initiator_groups", 00:07:29.524 "nvmf_set_crdt", 00:07:29.524 "nvmf_set_config", 00:07:29.524 "nvmf_set_max_subsystems", 00:07:29.524 "nvmf_stop_mdns_prr", 00:07:29.524 "nvmf_publish_mdns_prr", 00:07:29.524 "nvmf_subsystem_get_listeners", 00:07:29.524 "nvmf_subsystem_get_qpairs", 00:07:29.524 "nvmf_subsystem_get_controllers", 00:07:29.524 "nvmf_get_stats", 00:07:29.524 "nvmf_get_transports", 00:07:29.524 "nvmf_create_transport", 00:07:29.524 "nvmf_get_targets", 00:07:29.524 "nvmf_delete_target", 00:07:29.524 "nvmf_create_target", 00:07:29.524 
"nvmf_subsystem_allow_any_host", 00:07:29.524 "nvmf_subsystem_remove_host", 00:07:29.524 "nvmf_subsystem_add_host", 00:07:29.524 "nvmf_ns_remove_host", 00:07:29.524 "nvmf_ns_add_host", 00:07:29.524 "nvmf_subsystem_remove_ns", 00:07:29.524 "nvmf_subsystem_add_ns", 00:07:29.524 "nvmf_subsystem_listener_set_ana_state", 00:07:29.524 "nvmf_discovery_get_referrals", 00:07:29.524 "nvmf_discovery_remove_referral", 00:07:29.524 "nvmf_discovery_add_referral", 00:07:29.524 "nvmf_subsystem_remove_listener", 00:07:29.524 "nvmf_subsystem_add_listener", 00:07:29.524 "nvmf_delete_subsystem", 00:07:29.524 "nvmf_create_subsystem", 00:07:29.524 "nvmf_get_subsystems", 00:07:29.524 "env_dpdk_get_mem_stats", 00:07:29.524 "nbd_get_disks", 00:07:29.524 "nbd_stop_disk", 00:07:29.524 "nbd_start_disk", 00:07:29.524 "ublk_recover_disk", 00:07:29.524 "ublk_get_disks", 00:07:29.524 "ublk_stop_disk", 00:07:29.524 "ublk_start_disk", 00:07:29.524 "ublk_destroy_target", 00:07:29.524 "ublk_create_target", 00:07:29.524 "virtio_blk_create_transport", 00:07:29.524 "virtio_blk_get_transports", 00:07:29.524 "vhost_controller_set_coalescing", 00:07:29.524 "vhost_get_controllers", 00:07:29.524 "vhost_delete_controller", 00:07:29.524 "vhost_create_blk_controller", 00:07:29.524 "vhost_scsi_controller_remove_target", 00:07:29.524 "vhost_scsi_controller_add_target", 00:07:29.524 "vhost_start_scsi_controller", 00:07:29.524 "vhost_create_scsi_controller", 00:07:29.524 "thread_set_cpumask", 00:07:29.524 "scheduler_set_options", 00:07:29.524 "framework_get_governor", 00:07:29.524 "framework_get_scheduler", 00:07:29.524 "framework_set_scheduler", 00:07:29.524 "framework_get_reactors", 00:07:29.524 "thread_get_io_channels", 00:07:29.524 "thread_get_pollers", 00:07:29.524 "thread_get_stats", 00:07:29.524 "framework_monitor_context_switch", 00:07:29.524 "spdk_kill_instance", 00:07:29.524 "log_enable_timestamps", 00:07:29.524 "log_get_flags", 00:07:29.524 "log_clear_flag", 00:07:29.524 "log_set_flag", 00:07:29.524 "log_get_level", 00:07:29.524 "log_set_level", 00:07:29.524 "log_get_print_level", 00:07:29.524 "log_set_print_level", 00:07:29.524 "framework_enable_cpumask_locks", 00:07:29.524 "framework_disable_cpumask_locks", 00:07:29.524 "framework_wait_init", 00:07:29.524 "framework_start_init", 00:07:29.524 "scsi_get_devices", 00:07:29.524 "bdev_get_histogram", 00:07:29.524 "bdev_enable_histogram", 00:07:29.524 "bdev_set_qos_limit", 00:07:29.524 "bdev_set_qd_sampling_period", 00:07:29.524 "bdev_get_bdevs", 00:07:29.524 "bdev_reset_iostat", 00:07:29.524 "bdev_get_iostat", 00:07:29.524 "bdev_examine", 00:07:29.524 "bdev_wait_for_examine", 00:07:29.524 "bdev_set_options", 00:07:29.524 "notify_get_notifications", 00:07:29.524 "notify_get_types", 00:07:29.524 "accel_get_stats", 00:07:29.524 "accel_set_options", 00:07:29.524 "accel_set_driver", 00:07:29.524 "accel_crypto_key_destroy", 00:07:29.524 "accel_crypto_keys_get", 00:07:29.524 "accel_crypto_key_create", 00:07:29.524 "accel_assign_opc", 00:07:29.524 "accel_get_module_info", 00:07:29.524 "accel_get_opc_assignments", 00:07:29.524 "vmd_rescan", 00:07:29.524 "vmd_remove_device", 00:07:29.524 "vmd_enable", 00:07:29.524 "sock_get_default_impl", 00:07:29.524 "sock_set_default_impl", 00:07:29.524 "sock_impl_set_options", 00:07:29.524 "sock_impl_get_options", 00:07:29.524 "iobuf_get_stats", 00:07:29.524 "iobuf_set_options", 00:07:29.524 "framework_get_pci_devices", 00:07:29.524 "framework_get_config", 00:07:29.524 "framework_get_subsystems", 00:07:29.524 "trace_get_info", 00:07:29.524 
"trace_get_tpoint_group_mask", 00:07:29.524 "trace_disable_tpoint_group", 00:07:29.524 "trace_enable_tpoint_group", 00:07:29.524 "trace_clear_tpoint_mask", 00:07:29.524 "trace_set_tpoint_mask", 00:07:29.524 "keyring_get_keys", 00:07:29.524 "spdk_get_version", 00:07:29.524 "rpc_get_methods" 00:07:29.524 ] 00:07:29.524 07:13:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:29.524 07:13:01 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:29.524 07:13:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.524 07:13:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:29.524 07:13:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2515479 00:07:29.524 07:13:01 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2515479 ']' 00:07:29.524 07:13:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2515479 00:07:29.524 07:13:01 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:29.524 07:13:01 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.524 07:13:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2515479 00:07:29.524 07:13:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.524 07:13:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.524 07:13:01 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2515479' 00:07:29.524 killing process with pid 2515479 00:07:29.524 07:13:01 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2515479 00:07:29.524 07:13:01 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2515479 00:07:29.782 00:07:29.782 real 0m1.540s 00:07:29.782 user 0m2.783s 00:07:29.782 sys 0m0.494s 00:07:29.782 07:13:02 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.782 07:13:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.782 ************************************ 00:07:29.782 END TEST spdkcli_tcp 00:07:29.782 ************************************ 00:07:29.782 07:13:02 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:29.782 07:13:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.782 07:13:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.782 07:13:02 -- common/autotest_common.sh@10 -- # set +x 00:07:30.040 ************************************ 00:07:30.040 START TEST dpdk_mem_utility 00:07:30.040 ************************************ 00:07:30.040 07:13:02 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:30.040 * Looking for test storage... 
00:07:30.040 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:07:30.040 07:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:30.040 07:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2515834 00:07:30.040 07:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2515834 00:07:30.040 07:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.040 07:13:02 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2515834 ']' 00:07:30.040 07:13:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.040 07:13:02 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.040 07:13:02 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.040 07:13:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.040 07:13:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:30.040 [2024-07-25 07:13:02.505193] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:30.040 [2024-07-25 07:13:02.505254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2515834 ] 00:07:30.040 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.299 [2024-07-25 07:13:02.589382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.299 [2024-07-25 07:13:02.659155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.866 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.866 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:30.866 07:13:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:30.866 07:13:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:30.866 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.866 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:30.866 { 00:07:30.866 "filename": "/tmp/spdk_mem_dump.txt" 00:07:30.866 } 00:07:30.866 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.866 07:13:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:30.866 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:30.866 1 heaps totaling size 814.000000 MiB 00:07:30.866 size: 814.000000 MiB heap id: 0 00:07:30.866 end heaps---------- 00:07:30.866 8 mempools totaling size 598.116089 MiB 00:07:30.866 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:30.866 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:30.866 size: 84.521057 MiB name: bdev_io_2515834 00:07:30.866 size: 51.011292 MiB name: evtpool_2515834 00:07:30.866 size: 50.003479 MiB 
name: msgpool_2515834 00:07:30.866 size: 21.763794 MiB name: PDU_Pool 00:07:30.866 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:30.866 size: 0.026123 MiB name: Session_Pool 00:07:30.866 end mempools------- 00:07:30.866 6 memzones totaling size 4.142822 MiB 00:07:30.866 size: 1.000366 MiB name: RG_ring_0_2515834 00:07:30.866 size: 1.000366 MiB name: RG_ring_1_2515834 00:07:30.866 size: 1.000366 MiB name: RG_ring_4_2515834 00:07:30.866 size: 1.000366 MiB name: RG_ring_5_2515834 00:07:30.866 size: 0.125366 MiB name: RG_ring_2_2515834 00:07:30.866 size: 0.015991 MiB name: RG_ring_3_2515834 00:07:30.866 end memzones------- 00:07:30.866 07:13:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:31.125 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:31.125 list of free elements. size: 12.519348 MiB 00:07:31.125 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:31.125 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:31.125 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:31.125 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:31.125 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:31.125 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:31.125 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:31.125 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:31.125 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:31.125 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:31.125 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:31.125 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:31.125 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:31.125 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:31.125 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:31.125 list of standard malloc elements. 
size: 199.218079 MiB 00:07:31.125 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:31.125 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:31.125 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:31.125 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:31.125 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:31.125 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:31.125 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:31.125 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:31.125 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:31.125 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:31.125 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:31.125 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:31.125 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:31.125 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:31.125 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:31.125 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:31.125 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:31.125 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:31.125 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:31.125 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:31.125 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:31.125 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:31.125 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:31.125 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:31.125 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:31.125 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:31.125 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:31.125 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:31.125 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:31.125 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:31.125 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:31.125 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:31.125 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:31.125 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:31.125 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:31.125 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:31.125 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:31.125 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:31.125 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:31.125 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:31.125 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:31.125 list of memzone associated elements. 
size: 602.262573 MiB 00:07:31.125 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:31.125 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:31.125 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:31.125 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:31.125 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:31.125 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2515834_0 00:07:31.125 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:31.125 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2515834_0 00:07:31.125 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:31.125 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2515834_0 00:07:31.125 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:31.125 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:31.125 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:31.125 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:31.125 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:31.125 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2515834 00:07:31.125 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:31.125 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2515834 00:07:31.125 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:31.125 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2515834 00:07:31.125 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:31.125 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:31.125 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:31.125 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:31.125 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:31.125 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:31.125 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:31.125 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:31.125 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:31.125 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2515834 00:07:31.125 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:31.125 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2515834 00:07:31.125 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:31.125 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2515834 00:07:31.125 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:31.125 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2515834 00:07:31.125 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:31.125 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2515834 00:07:31.125 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:31.125 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:31.125 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:31.125 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:31.125 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:31.125 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:31.125 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:31.125 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2515834 00:07:31.125 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:31.125 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:31.125 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:31.125 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:31.125 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:31.125 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2515834 00:07:31.125 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:31.125 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:31.125 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:31.125 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2515834 00:07:31.125 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:31.126 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2515834 00:07:31.126 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:31.126 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:31.126 07:13:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:31.126 07:13:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2515834 00:07:31.126 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2515834 ']' 00:07:31.126 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2515834 00:07:31.126 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:31.126 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.126 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2515834 00:07:31.126 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.126 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.126 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2515834' 00:07:31.126 killing process with pid 2515834 00:07:31.126 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2515834 00:07:31.126 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2515834 00:07:31.385 00:07:31.385 real 0m1.420s 00:07:31.385 user 0m1.424s 00:07:31.385 sys 0m0.470s 00:07:31.385 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.385 07:13:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:31.385 ************************************ 00:07:31.385 END TEST dpdk_mem_utility 00:07:31.385 ************************************ 00:07:31.385 07:13:03 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:31.385 07:13:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.385 07:13:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.385 07:13:03 -- common/autotest_common.sh@10 -- # set +x 00:07:31.385 ************************************ 00:07:31.385 START TEST event 00:07:31.385 ************************************ 00:07:31.385 07:13:03 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:31.644 * Looking for test storage... 
00:07:31.644 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:31.644 07:13:03 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:31.644 07:13:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:31.644 07:13:03 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:31.644 07:13:03 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:31.644 07:13:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.644 07:13:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:31.644 ************************************ 00:07:31.644 START TEST event_perf 00:07:31.644 ************************************ 00:07:31.644 07:13:03 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:31.644 Running I/O for 1 seconds...[2024-07-25 07:13:04.003152] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:31.644 [2024-07-25 07:13:04.003218] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516153 ] 00:07:31.644 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.644 [2024-07-25 07:13:04.090830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.644 [2024-07-25 07:13:04.163737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.644 [2024-07-25 07:13:04.163835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.644 [2024-07-25 07:13:04.163908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.644 [2024-07-25 07:13:04.163910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.111 Running I/O for 1 seconds... 00:07:33.111 lcore 0: 213452 00:07:33.111 lcore 1: 213452 00:07:33.111 lcore 2: 213452 00:07:33.111 lcore 3: 213453 00:07:33.111 done. 00:07:33.111 00:07:33.111 real 0m1.251s 00:07:33.111 user 0m4.141s 00:07:33.111 sys 0m0.106s 00:07:33.111 07:13:05 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.111 07:13:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:33.111 ************************************ 00:07:33.111 END TEST event_perf 00:07:33.111 ************************************ 00:07:33.111 07:13:05 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:33.111 07:13:05 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:33.111 07:13:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.111 07:13:05 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.111 ************************************ 00:07:33.111 START TEST event_reactor 00:07:33.111 ************************************ 00:07:33.111 07:13:05 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:33.111 [2024-07-25 07:13:05.339529] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
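A gloss on the event_perf numbers above: -m 0xF runs one reactor per core for the -t 1 second measurement window, and the four per-lcore totals coming out nearly identical (~213k events each) is the expected signature of events being spread evenly across reactors. Rerun by hand from the workspace root:

    # Reproduce the run above: 4 reactors, 1-second window.
    test/event/event_perf/event_perf -m 0xF -t 1
    # Expect four near-equal "lcore N: <count>" lines.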
00:07:33.111 [2024-07-25 07:13:05.339601] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516434 ] 00:07:33.112 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.112 [2024-07-25 07:13:05.426551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.112 [2024-07-25 07:13:05.493092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.049 test_start 00:07:34.049 oneshot 00:07:34.049 tick 100 00:07:34.049 tick 100 00:07:34.049 tick 250 00:07:34.049 tick 100 00:07:34.049 tick 100 00:07:34.049 tick 100 00:07:34.049 tick 500 00:07:34.049 tick 250 00:07:34.049 tick 100 00:07:34.049 tick 100 00:07:34.049 tick 250 00:07:34.049 tick 100 00:07:34.049 tick 100 00:07:34.049 test_end 00:07:34.049 00:07:34.049 real 0m1.243s 00:07:34.049 user 0m1.136s 00:07:34.049 sys 0m0.103s 00:07:34.049 07:13:06 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.049 07:13:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:34.049 ************************************ 00:07:34.049 END TEST event_reactor 00:07:34.049 ************************************ 00:07:34.308 07:13:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:34.308 07:13:06 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:34.308 07:13:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.308 07:13:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:34.308 ************************************ 00:07:34.308 START TEST event_reactor_perf 00:07:34.308 ************************************ 00:07:34.308 07:13:06 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:34.308 [2024-07-25 07:13:06.662079] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:07:34.308 [2024-07-25 07:13:06.662163] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516715 ] 00:07:34.308 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.308 [2024-07-25 07:13:06.748623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.308 [2024-07-25 07:13:06.817105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.688 test_start 00:07:35.688 test_end 00:07:35.688 Performance: 522214 events per second 00:07:35.688 00:07:35.688 real 0m1.245s 00:07:35.688 user 0m1.142s 00:07:35.688 sys 0m0.099s 00:07:35.688 07:13:07 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.688 07:13:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.688 ************************************ 00:07:35.688 END TEST event_reactor_perf 00:07:35.688 ************************************ 00:07:35.688 07:13:07 event -- event/event.sh@49 -- # uname -s 00:07:35.688 07:13:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:35.688 07:13:07 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:35.688 07:13:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.688 07:13:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.688 07:13:07 event -- common/autotest_common.sh@10 -- # set +x 00:07:35.688 ************************************ 00:07:35.688 START TEST event_scheduler 00:07:35.688 ************************************ 00:07:35.688 07:13:07 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:35.688 * Looking for test storage... 00:07:35.688 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:07:35.688 07:13:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:35.688 07:13:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2517021 00:07:35.688 07:13:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:35.688 07:13:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:35.688 07:13:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2517021 00:07:35.688 07:13:08 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2517021 ']' 00:07:35.688 07:13:08 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.688 07:13:08 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.688 07:13:08 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
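Launch flags for the scheduler test starting here: -m 0xF brings up four reactors, -p 0x2 makes core 2 the main lcore (matching --main-lcore=2 in the EAL line below), and --wait-for-rpc holds initialization until the test issues framework_start_init. As a standalone sketch (-f is carried over verbatim from the trace; its effect is not visible in this log):

    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    waitforlisten "$scheduler_pid"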
00:07:35.688 07:13:08 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.688 07:13:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:35.688 [2024-07-25 07:13:08.126004] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:35.688 [2024-07-25 07:13:08.126055] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517021 ] 00:07:35.688 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.688 [2024-07-25 07:13:08.208143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.947 [2024-07-25 07:13:08.281613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.947 [2024-07-25 07:13:08.281701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.947 [2024-07-25 07:13:08.281722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.947 [2024-07-25 07:13:08.281724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.516 07:13:08 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.516 07:13:08 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:36.516 07:13:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:36.516 07:13:08 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.516 07:13:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:36.516 [2024-07-25 07:13:08.944085] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:36.516 [2024-07-25 07:13:08.944106] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:36.516 [2024-07-25 07:13:08.944117] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:36.516 [2024-07-25 07:13:08.944125] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:36.516 [2024-07-25 07:13:08.944132] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:36.516 07:13:08 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.516 07:13:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:36.516 07:13:08 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.516 07:13:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:36.516 [2024-07-25 07:13:09.016438] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
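Once the app reports "Scheduler test application started.", scheduler.sh selects the dynamic scheduler and builds its thread mix through the test-only scheduler_plugin RPCs. The sequence below condenses the traced calls; names, cpumasks, activity percentages, and thread ids are exactly as logged:

    # RPC sequence of the scheduler_create_thread subtest.
    rpc_cmd framework_set_scheduler dynamic
    rpc_cmd framework_start_init
    for mask in 0x1 0x2 0x4 0x8; do   # one 100%-busy thread pinned per core
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done
    for mask in 0x1 0x2 0x4 0x8; do   # one idle thread pinned per core
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0   # -> thread id 11
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50             # raise to 50% busy
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100     # -> thread id 12
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12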
00:07:36.516 07:13:09 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.516 07:13:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:36.516 07:13:09 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.516 07:13:09 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.516 07:13:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 ************************************ 00:07:36.776 START TEST scheduler_create_thread 00:07:36.776 ************************************ 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 2 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 3 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 4 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 5 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 6 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 7 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 8 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 9 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 10 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.776 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.344 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.344 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:37.344 07:13:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:37.344 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.344 07:13:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.281 07:13:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.281 00:07:38.281 real 0m1.754s 00:07:38.281 user 0m0.012s 00:07:38.281 sys 0m0.005s 00:07:38.281 07:13:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.281 07:13:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.281 ************************************ 00:07:38.281 END TEST scheduler_create_thread 00:07:38.281 ************************************ 00:07:38.540 07:13:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:38.540 07:13:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2517021 00:07:38.540 07:13:10 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2517021 ']' 00:07:38.540 07:13:10 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2517021 00:07:38.540 07:13:10 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:38.540 07:13:10 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.540 07:13:10 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2517021 00:07:38.540 07:13:10 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:38.540 07:13:10 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:38.540 07:13:10 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2517021' 00:07:38.540 killing process with pid 2517021 00:07:38.540 07:13:10 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2517021 00:07:38.540 07:13:10 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2517021 00:07:38.800 [2024-07-25 07:13:11.291700] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
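Stripped of the xtrace noise, scheduler_create_thread drives the scheduler test app through its RPC plugin: four fully active threads pinned to cores 0-3, four idle threads pinned the same way, one unpinned thread at 30% activity, one created idle and raised to 50% via its returned thread id, and one created active and deleted again. A condensed sketch (creation order compressed into one loop; the $rpc shorthand is illustrative and assumes the plugin module is importable the way the harness arranges it):

rpc='./scripts/rpc.py --plugin scheduler_plugin'
for mask in 0x1 0x2 0x4 0x8; do
  $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100   # busy, pinned
  $rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0     # idle, pinned
done
$rpc scheduler_thread_create -n one_third_active -a 30              # unpinned, 30% active
tid=$($rpc scheduler_thread_create -n half_active -a 0)             # trace: thread_id=11
$rpc scheduler_thread_set_active "$tid" 50                          # raise activity to 50%
tid=$($rpc scheduler_thread_create -n deleted -a 100)               # trace: thread_id=12
$rpc scheduler_thread_delete "$tid"                                 # and remove it again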
00:07:39.060 00:07:39.060 real 0m3.492s 00:07:39.060 user 0m6.176s 00:07:39.060 sys 0m0.450s 00:07:39.060 07:13:11 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.060 07:13:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:39.060 ************************************ 00:07:39.060 END TEST event_scheduler 00:07:39.060 ************************************ 00:07:39.060 07:13:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:39.060 07:13:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:39.060 07:13:11 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.060 07:13:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.060 07:13:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:39.060 ************************************ 00:07:39.060 START TEST app_repeat 00:07:39.060 ************************************ 00:07:39.060 07:13:11 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2517619 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2517619' 00:07:39.060 Process app_repeat pid: 2517619 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:39.060 spdk_app_start Round 0 00:07:39.060 07:13:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2517619 /var/tmp/spdk-nbd.sock 00:07:39.060 07:13:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2517619 ']' 00:07:39.060 07:13:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:39.060 07:13:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.060 07:13:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:39.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:39.060 07:13:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.060 07:13:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:39.319 [2024-07-25 07:13:11.594191] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
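app_repeat_test then takes over: it starts the app_repeat binary against its own RPC socket with a two-core mask and four repeat iterations, and drives three rounds from the shell, each ending in a spdk_kill_instance SIGTERM that tells the app to tear down and enter its next round. The shape of the loop as it appears in the trace (paths shortened, error handling omitted; waitforlisten is the harness helper that polls the UNIX socket):

rpc_server=/var/tmp/spdk-nbd.sock
./test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
repeat_pid=$!
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
for i in {0..2}; do
  echo "spdk_app_start Round $i"
  waitforlisten "$repeat_pid" "$rpc_server"
  # ... create Malloc0/Malloc1 and run the nbd read/write verification ...
  ./scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
  sleep 3    # let the app wind down and re-enter spdk_app_start
done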
00:07:39.319 [2024-07-25 07:13:11.594252] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517619 ] 00:07:39.319 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.319 [2024-07-25 07:13:11.678674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:39.319 [2024-07-25 07:13:11.748489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.319 [2024-07-25 07:13:11.748492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.887 07:13:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.887 07:13:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:39.887 07:13:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:40.146 Malloc0 00:07:40.146 07:13:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:40.406 Malloc1 00:07:40.406 07:13:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:40.406 07:13:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:40.664 /dev/nbd0 00:07:40.664 07:13:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:40.664 07:13:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:40.665 07:13:12 event.app_repeat -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:40.665 1+0 records in 00:07:40.665 1+0 records out 00:07:40.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264292 s, 15.5 MB/s 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:40.665 07:13:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:40.665 07:13:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.665 07:13:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:40.665 07:13:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:40.665 /dev/nbd1 00:07:40.665 07:13:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:40.665 07:13:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:40.665 07:13:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:40.665 07:13:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:40.665 07:13:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:40.665 07:13:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:40.665 07:13:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:40.665 07:13:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:40.665 07:13:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:40.665 07:13:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:40.665 07:13:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:40.665 1+0 records in 00:07:40.665 1+0 records out 00:07:40.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168454 s, 24.3 MB/s 00:07:40.665 07:13:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:40.665 07:13:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:40.665 07:13:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:40.923 07:13:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:40.923 07:13:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:40.923 07:13:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.923 07:13:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:07:40.923 07:13:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.923 07:13:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.923 07:13:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.923 07:13:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:40.923 { 00:07:40.923 "nbd_device": "/dev/nbd0", 00:07:40.923 "bdev_name": "Malloc0" 00:07:40.923 }, 00:07:40.923 { 00:07:40.923 "nbd_device": "/dev/nbd1", 00:07:40.923 "bdev_name": "Malloc1" 00:07:40.923 } 00:07:40.923 ]' 00:07:40.923 07:13:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:40.923 { 00:07:40.923 "nbd_device": "/dev/nbd0", 00:07:40.923 "bdev_name": "Malloc0" 00:07:40.923 }, 00:07:40.923 { 00:07:40.923 "nbd_device": "/dev/nbd1", 00:07:40.923 "bdev_name": "Malloc1" 00:07:40.923 } 00:07:40.923 ]' 00:07:40.923 07:13:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:40.924 /dev/nbd1' 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:40.924 /dev/nbd1' 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:40.924 256+0 records in 00:07:40.924 256+0 records out 00:07:40.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108026 s, 97.1 MB/s 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.924 07:13:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:41.183 256+0 records in 00:07:41.183 256+0 records out 00:07:41.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194751 s, 53.8 MB/s 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:41.183 256+0 records in 00:07:41.183 256+0 records out 00:07:41.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178276 s, 58.8 MB/s 00:07:41.183 07:13:13 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.183 07:13:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:41.442 07:13:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:41.442 07:13:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:41.442 07:13:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:41.442 07:13:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.442 07:13:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.442 
07:13:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:41.442 07:13:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:41.442 07:13:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.442 07:13:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:41.442 07:13:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.442 07:13:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.702 07:13:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:41.702 07:13:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:41.702 07:13:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.702 07:13:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:41.702 07:13:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:41.702 07:13:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.702 07:13:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:41.702 07:13:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:41.702 07:13:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:41.702 07:13:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:41.702 07:13:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:41.702 07:13:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:41.702 07:13:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:41.961 07:13:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:42.221 [2024-07-25 07:13:14.509078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:42.221 [2024-07-25 07:13:14.572157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.221 [2024-07-25 07:13:14.572161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.221 [2024-07-25 07:13:14.612697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:42.221 [2024-07-25 07:13:14.612739] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:45.511 07:13:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:45.511 07:13:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:45.511 spdk_app_start Round 1 00:07:45.511 07:13:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2517619 /var/tmp/spdk-nbd.sock 00:07:45.511 07:13:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2517619 ']' 00:07:45.511 07:13:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:45.511 07:13:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.511 07:13:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:45.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
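The data path that every round runs (Round 0 above, Rounds 1 and 2 below) is the generic nbd_rpc_data_verify flow: two 64 MiB malloc bdevs are exported as /dev/nbd0 and /dev/nbd1 (the single-block dd reads right after nbd_start_disk are waitfornbd confirming each device is readable), a 1 MiB random pattern is written through each device with O_DIRECT, read back with cmp, and the devices are torn down until nbd_get_disks reports an empty list. Condensed from the trace, with the temp-file paths shortened:

rpc='./scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
$rpc bdev_malloc_create 64 4096                 # -> Malloc0 (64 MiB, 4 KiB blocks)
$rpc bdev_malloc_create 64 4096                 # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256          # 1 MiB pattern
for d in /dev/nbd0 /dev/nbd1; do
  dd if=nbdrandtest of="$d" bs=4096 count=256 oflag=direct   # write through nbd
  cmp -b -n 1M nbdrandtest "$d"                              # verify readback
done
rm nbdrandtest
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
# grep -c exits non-zero when nothing matches, which the helper tolerates
# (the bare 'true' visible in the trace)
count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]                              # all nbd devices gone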
00:07:45.511 07:13:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.511 07:13:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:45.511 07:13:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.511 07:13:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:45.511 07:13:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:45.511 Malloc0 00:07:45.511 07:13:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:45.511 Malloc1 00:07:45.511 07:13:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.511 07:13:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:45.511 /dev/nbd0 00:07:45.511 07:13:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:45.511 07:13:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:45.511 07:13:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:45.511 07:13:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:45.511 07:13:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.511 07:13:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.511 07:13:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:07:45.770 1+0 records in 00:07:45.770 1+0 records out 00:07:45.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016471 s, 24.9 MB/s 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:45.770 07:13:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.770 07:13:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.770 07:13:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:45.770 /dev/nbd1 00:07:45.770 07:13:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:45.770 07:13:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.770 07:13:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:45.770 1+0 records in 00:07:45.770 1+0 records out 00:07:45.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246561 s, 16.6 MB/s 00:07:45.771 07:13:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:45.771 07:13:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:45.771 07:13:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:45.771 07:13:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.771 07:13:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:45.771 07:13:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.771 07:13:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.771 07:13:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.771 07:13:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.771 07:13:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:46.030 { 00:07:46.030 
"nbd_device": "/dev/nbd0", 00:07:46.030 "bdev_name": "Malloc0" 00:07:46.030 }, 00:07:46.030 { 00:07:46.030 "nbd_device": "/dev/nbd1", 00:07:46.030 "bdev_name": "Malloc1" 00:07:46.030 } 00:07:46.030 ]' 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:46.030 { 00:07:46.030 "nbd_device": "/dev/nbd0", 00:07:46.030 "bdev_name": "Malloc0" 00:07:46.030 }, 00:07:46.030 { 00:07:46.030 "nbd_device": "/dev/nbd1", 00:07:46.030 "bdev_name": "Malloc1" 00:07:46.030 } 00:07:46.030 ]' 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:46.030 /dev/nbd1' 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:46.030 /dev/nbd1' 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:46.030 256+0 records in 00:07:46.030 256+0 records out 00:07:46.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00482647 s, 217 MB/s 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:46.030 256+0 records in 00:07:46.030 256+0 records out 00:07:46.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195502 s, 53.6 MB/s 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:46.030 07:13:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:46.289 256+0 records in 00:07:46.289 256+0 records out 00:07:46.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208372 s, 50.3 MB/s 00:07:46.289 07:13:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:46.289 07:13:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:46.289 07:13:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:46.289 07:13:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:46.289 07:13:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:46.289 07:13:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:46.289 07:13:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:46.289 07:13:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:46.289 07:13:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:46.289 07:13:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:46.289 07:13:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:46.289 07:13:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.290 07:13:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.549 07:13:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.549 07:13:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.549 07:13:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.549 07:13:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.549 07:13:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.549 07:13:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.549 07:13:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:46.549 07:13:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.549 07:13:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.549 07:13:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.549 07:13:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.808 07:13:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:46.808 07:13:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:46.808 07:13:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:46.808 07:13:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:46.808 07:13:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:46.808 07:13:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.808 07:13:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:46.808 07:13:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:46.808 07:13:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:46.808 07:13:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:46.808 07:13:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:46.808 07:13:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:46.808 07:13:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:47.067 07:13:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:47.068 [2024-07-25 07:13:19.585382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:47.327 [2024-07-25 07:13:19.652562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.327 [2024-07-25 07:13:19.652565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.327 [2024-07-25 07:13:19.694174] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:47.327 [2024-07-25 07:13:19.694218] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:50.686 07:13:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:50.686 07:13:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:50.686 spdk_app_start Round 2 00:07:50.686 07:13:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2517619 /var/tmp/spdk-nbd.sock 00:07:50.686 07:13:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2517619 ']' 00:07:50.686 07:13:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:50.686 07:13:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.686 07:13:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:50.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:50.686 07:13:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.686 07:13:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:50.686 07:13:22 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.686 07:13:22 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:50.686 07:13:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:50.686 Malloc0 00:07:50.686 07:13:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:50.686 Malloc1 00:07:50.686 07:13:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:50.686 07:13:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:50.686 /dev/nbd0 00:07:50.686 07:13:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:50.686 07:13:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:07:50.686 1+0 records in 00:07:50.686 1+0 records out 00:07:50.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261236 s, 15.7 MB/s 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:50.686 07:13:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:50.686 07:13:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.686 07:13:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:50.686 07:13:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:50.946 /dev/nbd1 00:07:50.946 07:13:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:50.946 07:13:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:50.946 1+0 records in 00:07:50.946 1+0 records out 00:07:50.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248397 s, 16.5 MB/s 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:50.946 07:13:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:50.946 07:13:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.946 07:13:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:50.946 07:13:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:50.946 07:13:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.946 07:13:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:51.205 { 00:07:51.205 
"nbd_device": "/dev/nbd0", 00:07:51.205 "bdev_name": "Malloc0" 00:07:51.205 }, 00:07:51.205 { 00:07:51.205 "nbd_device": "/dev/nbd1", 00:07:51.205 "bdev_name": "Malloc1" 00:07:51.205 } 00:07:51.205 ]' 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:51.205 { 00:07:51.205 "nbd_device": "/dev/nbd0", 00:07:51.205 "bdev_name": "Malloc0" 00:07:51.205 }, 00:07:51.205 { 00:07:51.205 "nbd_device": "/dev/nbd1", 00:07:51.205 "bdev_name": "Malloc1" 00:07:51.205 } 00:07:51.205 ]' 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:51.205 /dev/nbd1' 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:51.205 /dev/nbd1' 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:51.205 256+0 records in 00:07:51.205 256+0 records out 00:07:51.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115556 s, 90.7 MB/s 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:51.205 256+0 records in 00:07:51.205 256+0 records out 00:07:51.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197625 s, 53.1 MB/s 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:51.205 256+0 records in 00:07:51.205 256+0 records out 00:07:51.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201034 s, 52.2 MB/s 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.205 07:13:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:51.464 07:13:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:51.464 07:13:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:51.464 07:13:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:51.464 07:13:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.464 07:13:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.464 07:13:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:51.464 07:13:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:51.464 07:13:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.464 07:13:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.464 07:13:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:51.723 07:13:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:51.723 07:13:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:51.723 07:13:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:51.723 07:13:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.723 07:13:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.723 07:13:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:51.723 07:13:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:51.723 07:13:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.724 07:13:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.724 07:13:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.724 07:13:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.724 07:13:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:51.724 07:13:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:51.724 07:13:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.983 07:13:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:51.983 07:13:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.983 07:13:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.983 07:13:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:51.983 07:13:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.983 07:13:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.983 07:13:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:51.983 07:13:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:51.983 07:13:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:51.983 07:13:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:51.983 07:13:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:52.242 [2024-07-25 07:13:24.668234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:52.242 [2024-07-25 07:13:24.730954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.243 [2024-07-25 07:13:24.730958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.243 [2024-07-25 07:13:24.771744] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:52.243 [2024-07-25 07:13:24.771799] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:55.532 07:13:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2517619 /var/tmp/spdk-nbd.sock 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2517619 ']' 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:55.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
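The nbd_dd_data_verify helper traced above boils down to a write/read-back roundtrip: fill a scratch file with random data, dd it onto every exported /dev/nbdX, then cmp the device contents back against the file. A minimal sketch of that pattern follows; the device names, block size, and 1 MiB length come straight from the trace, while the mktemp scratch path is an illustrative stand-in for the nbdrandtest file under test/event:

    #!/usr/bin/env bash
    # Illustrative reconstruction of the write/verify phases traced above,
    # not the verbatim bdev/nbd_common.sh source.
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=$(mktemp)                                      # stand-in for .../test/event/nbdrandtest

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256     # 1 MiB of random payload
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                     # verify phase: exits non-zero on any mismatch
    done
    rm "$tmp_file"

Teardown then issues nbd_stop_disk over /var/tmp/spdk-nbd.sock for each device and, per the waitfornbd_exit trace, polls grep -q -w nbdX /proc/partitions for up to 20 iterations until the kernel drops the device.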
00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:55.532 07:13:27 event.app_repeat -- event/event.sh@39 -- # killprocess 2517619 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2517619 ']' 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2517619 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2517619 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2517619' 00:07:55.532 killing process with pid 2517619 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2517619 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2517619 00:07:55.532 spdk_app_start is called in Round 0. 00:07:55.532 Shutdown signal received, stop current app iteration 00:07:55.532 Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 reinitialization... 00:07:55.532 spdk_app_start is called in Round 1. 00:07:55.532 Shutdown signal received, stop current app iteration 00:07:55.532 Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 reinitialization... 00:07:55.532 spdk_app_start is called in Round 2. 00:07:55.532 Shutdown signal received, stop current app iteration 00:07:55.532 Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 reinitialization... 00:07:55.532 spdk_app_start is called in Round 3. 
00:07:55.532 Shutdown signal received, stop current app iteration 00:07:55.532 07:13:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:55.532 07:13:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:55.532 00:07:55.532 real 0m16.315s 00:07:55.532 user 0m34.567s 00:07:55.532 sys 0m3.159s 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.532 07:13:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:55.532 ************************************ 00:07:55.532 END TEST app_repeat 00:07:55.532 ************************************ 00:07:55.532 07:13:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:55.532 07:13:27 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:55.532 07:13:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.532 07:13:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.532 07:13:27 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.532 ************************************ 00:07:55.532 START TEST cpu_locks 00:07:55.532 ************************************ 00:07:55.532 07:13:27 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:55.532 * Looking for test storage... 00:07:55.532 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:55.532 07:13:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:55.532 07:13:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:55.532 07:13:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:55.791 07:13:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:55.791 07:13:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.791 07:13:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.791 07:13:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.791 ************************************ 00:07:55.791 START TEST default_locks 00:07:55.791 ************************************ 00:07:55.791 07:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:55.791 07:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2520772 00:07:55.791 07:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2520772 00:07:55.791 07:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:55.791 07:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2520772 ']' 00:07:55.791 07:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.791 07:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.791 07:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
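killprocess, which just reaped the app_repeat target (pid 2517619) and closes every lock test below, is more careful than a bare kill. Per the trace it probes the pid with kill -0, confirms via ps that the target is still an SPDK reactor rather than a sudo wrapper, then signals and waits so the core-lock files get released. A sketch under those assumptions (the function body is a reconstruction, not the verbatim autotest_common.sh source):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                        # bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1        # never signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap it; flock-held lock files vanish with the fd
    }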
00:07:55.791 07:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.791 07:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.791 [2024-07-25 07:13:28.152772] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:55.792 [2024-07-25 07:13:28.152825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520772 ] 00:07:55.792 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.792 [2024-07-25 07:13:28.234162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.792 [2024-07-25 07:13:28.302316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.728 07:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.728 07:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:56.728 07:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2520772 00:07:56.728 07:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2520772 00:07:56.728 07:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:57.296 lslocks: write error 00:07:57.296 07:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2520772 00:07:57.296 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2520772 ']' 00:07:57.296 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2520772 00:07:57.296 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:57.296 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.296 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2520772 00:07:57.296 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.296 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.296 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2520772' 00:07:57.296 killing process with pid 2520772 00:07:57.296 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2520772 00:07:57.296 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2520772 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2520772 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2520772 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 2520772 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2520772 ']' 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.556 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2520772) - No such process 00:07:57.556 ERROR: process (pid: 2520772) is no longer running 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:57.556 00:07:57.556 real 0m1.794s 00:07:57.556 user 0m1.856s 00:07:57.556 sys 0m0.685s 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.556 07:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.556 ************************************ 00:07:57.556 END TEST default_locks 00:07:57.556 ************************************ 00:07:57.556 07:13:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:57.556 07:13:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.556 07:13:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.556 07:13:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.556 ************************************ 00:07:57.556 START TEST default_locks_via_rpc 00:07:57.556 ************************************ 00:07:57.556 07:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:57.556 07:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2521074 00:07:57.556 07:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2521074 00:07:57.556 07:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
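Two helpers recur through the whole suite and both just ran. locks_exist proves a live spdk_tgt actually holds its per-core lock by piping lslocks -p <pid> into grep -q spdk_cpu_lock (the stray "lslocks: write error" lines are lslocks complaining after grep -q closes the pipe early, not a test failure). NOT wraps a command that is expected to fail, such as waitforlisten on the already-killed pid 2520772, and succeeds only when the wrapped command's exit status is non-zero. Reconstructed sketches, with the es > 128 signal handling from the trace elided:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock     # the claimed core shows up as an flock on spdk_cpu_lock_*
    }

    NOT() {
        local es=0
        "$@" || es=$?
        # the traced helper also special-cases es > 128 (death by signal); elided here
        (( !es == 0 ))                              # arithmetic truth, and thus success, only on failure
    }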
00:07:57.556 07:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2521074 ']' 00:07:57.556 07:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.556 07:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.556 07:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.556 07:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.556 07:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.556 [2024-07-25 07:13:30.036756] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:57.556 [2024-07-25 07:13:30.036809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521074 ] 00:07:57.556 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.814 [2024-07-25 07:13:30.121320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.814 [2024-07-25 07:13:30.191887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2521074 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2521074 00:07:58.382 07:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:58.950 07:13:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 
2521074 00:07:58.950 07:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2521074 ']' 00:07:58.950 07:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2521074 00:07:58.950 07:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:58.950 07:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.950 07:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2521074 00:07:58.950 07:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.950 07:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.950 07:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2521074' 00:07:58.950 killing process with pid 2521074 00:07:58.950 07:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2521074 00:07:58.950 07:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2521074 00:07:59.519 00:07:59.519 real 0m1.783s 00:07:59.519 user 0m1.864s 00:07:59.519 sys 0m0.644s 00:07:59.519 07:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.519 07:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.519 ************************************ 00:07:59.519 END TEST default_locks_via_rpc 00:07:59.519 ************************************ 00:07:59.519 07:13:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:59.519 07:13:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.519 07:13:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.519 07:13:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.519 ************************************ 00:07:59.519 START TEST non_locking_app_on_locked_coremask 00:07:59.519 ************************************ 00:07:59.519 07:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:59.519 07:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2521424 00:07:59.519 07:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2521424 /var/tmp/spdk.sock 00:07:59.519 07:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:59.519 07:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2521424 ']' 00:07:59.519 07:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.519 07:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.519 07:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:59.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.519 07:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.519 07:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.519 [2024-07-25 07:13:31.896153] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:59.519 [2024-07-25 07:13:31.896200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521424 ] 00:07:59.519 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.519 [2024-07-25 07:13:31.977854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.778 [2024-07-25 07:13:32.050864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.346 07:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.346 07:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:00.346 07:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:00.346 07:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2521635 00:08:00.346 07:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2521635 /var/tmp/spdk2.sock 00:08:00.346 07:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2521635 ']' 00:08:00.346 07:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.346 07:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.346 07:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:00.346 07:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.346 07:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.346 [2024-07-25 07:13:32.722331] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:08:00.346 [2024-07-25 07:13:32.722382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521635 ] 00:08:00.346 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.346 [2024-07-25 07:13:32.840389] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
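non_locking_app_on_locked_coremask exercises exactly the launch sequence traced here: a first spdk_tgt claims core 0 with -m 0x1, then a second target is started on the same core but with --disable-cpumask-locks and its own RPC socket, and the test passes only if the second one boots instead of aborting on the held lock. Schematically (binary path and flags as traced; the pid variables and commented wait calls are illustrative):

    SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt

    "$SPDK_BIN" -m 0x1 &                                  # first instance takes the core 0 lock
    pid1=$!
    # waitforlisten "$pid1" /var/tmp/spdk.sock

    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                               # same core, lock acquisition skipped
    # waitforlisten "$pid2" /var/tmp/spdk2.sock

    locks_exist "$pid1"                                   # core 0's lock file still belongs to the first pid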
00:08:00.346 [2024-07-25 07:13:32.840415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.605 [2024-07-25 07:13:32.980427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.173 07:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.173 07:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:01.173 07:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2521424 00:08:01.173 07:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2521424 00:08:01.173 07:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:02.550 lslocks: write error 00:08:02.550 07:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2521424 00:08:02.550 07:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2521424 ']' 00:08:02.550 07:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2521424 00:08:02.550 07:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:02.550 07:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.550 07:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2521424 00:08:02.550 07:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.550 07:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.550 07:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2521424' 00:08:02.550 killing process with pid 2521424 00:08:02.550 07:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2521424 00:08:02.550 07:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2521424 00:08:02.809 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2521635 00:08:02.809 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2521635 ']' 00:08:02.809 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2521635 00:08:02.809 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:03.069 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.069 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2521635 00:08:03.069 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.069 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.069 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2521635' 00:08:03.069 
killing process with pid 2521635 00:08:03.069 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2521635 00:08:03.069 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2521635 00:08:03.329 00:08:03.329 real 0m3.847s 00:08:03.329 user 0m4.086s 00:08:03.329 sys 0m1.313s 00:08:03.329 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.329 07:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.329 ************************************ 00:08:03.329 END TEST non_locking_app_on_locked_coremask 00:08:03.329 ************************************ 00:08:03.329 07:13:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:03.329 07:13:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.329 07:13:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.329 07:13:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.329 ************************************ 00:08:03.329 START TEST locking_app_on_unlocked_coremask 00:08:03.329 ************************************ 00:08:03.329 07:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:03.329 07:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2522202 00:08:03.329 07:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2522202 /var/tmp/spdk.sock 00:08:03.329 07:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:03.329 07:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2522202 ']' 00:08:03.329 07:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.329 07:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.329 07:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.329 07:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.329 07:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.329 [2024-07-25 07:13:35.830922] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:08:03.329 [2024-07-25 07:13:35.830970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522202 ] 00:08:03.588 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.588 [2024-07-25 07:13:35.911811] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
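Here the roles flip: locking_app_on_unlocked_coremask starts its first target with --disable-cpumask-locks (hence the notice just above) and then checks that a normally launched second target on the same core can still take the lock. The runtime counterpart to that flag appeared earlier in default_locks_via_rpc: the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs toggle the same lock files on a live target. A sketch of that toggle, with rpc.py path and method names as traced and $spdk_pid as an illustrative variable:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc framework_disable_cpumask_locks                 # drop the per-core lock files
    lslocks -p "$spdk_pid" | grep -q spdk_cpu_lock && echo "unexpected: still locked"

    $rpc framework_enable_cpumask_locks                  # re-acquire them for the current cpumask
    lslocks -p "$spdk_pid" | grep -q spdk_cpu_lock || echo "unexpected: lock not taken"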
00:08:03.588 [2024-07-25 07:13:35.911837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.588 [2024-07-25 07:13:35.975717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.157 07:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.157 07:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:04.157 07:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2522320 00:08:04.157 07:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2522320 /var/tmp/spdk2.sock 00:08:04.157 07:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:04.157 07:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2522320 ']' 00:08:04.157 07:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:04.157 07:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.157 07:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:04.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:04.157 07:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.157 07:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.157 [2024-07-25 07:13:36.680596] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:08:04.157 [2024-07-25 07:13:36.680664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522320 ] 00:08:04.416 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.416 [2024-07-25 07:13:36.798383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.416 [2024-07-25 07:13:36.939964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.985 07:13:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.985 07:13:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:04.985 07:13:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2522320 00:08:04.985 07:13:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:04.985 07:13:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2522320 00:08:06.361 lslocks: write error 00:08:06.361 07:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2522202 00:08:06.361 07:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2522202 ']' 00:08:06.361 07:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2522202 00:08:06.361 07:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:06.361 07:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.361 07:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2522202 00:08:06.361 07:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.361 07:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.361 07:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2522202' 00:08:06.361 killing process with pid 2522202 00:08:06.361 07:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2522202 00:08:06.361 07:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2522202 00:08:06.930 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2522320 00:08:06.930 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2522320 ']' 00:08:06.930 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2522320 00:08:06.930 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:06.930 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.930 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2522320 00:08:06.930 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:08:06.930 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.930 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2522320' 00:08:06.930 killing process with pid 2522320 00:08:06.930 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2522320 00:08:06.930 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2522320 00:08:07.232 00:08:07.232 real 0m3.810s 00:08:07.232 user 0m4.064s 00:08:07.232 sys 0m1.305s 00:08:07.232 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.232 07:13:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.232 ************************************ 00:08:07.232 END TEST locking_app_on_unlocked_coremask 00:08:07.232 ************************************ 00:08:07.232 07:13:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:07.232 07:13:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.232 07:13:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.232 07:13:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.232 ************************************ 00:08:07.232 START TEST locking_app_on_locked_coremask 00:08:07.232 ************************************ 00:08:07.232 07:13:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:07.232 07:13:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2522863 00:08:07.232 07:13:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2522863 /var/tmp/spdk.sock 00:08:07.232 07:13:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2522863 ']' 00:08:07.232 07:13:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.232 07:13:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:07.233 07:13:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.233 07:13:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.233 07:13:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.233 07:13:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.233 [2024-07-25 07:13:39.695948] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:08:07.233 [2024-07-25 07:13:39.695992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522863 ] 00:08:07.495 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.495 [2024-07-25 07:13:39.779898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.495 [2024-07-25 07:13:39.853770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2523046 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2523046 /var/tmp/spdk2.sock 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2523046 /var/tmp/spdk2.sock 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2523046 /var/tmp/spdk2.sock 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2523046 ']' 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:08.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.064 07:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.064 [2024-07-25 07:13:40.537620] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:08:08.064 [2024-07-25 07:13:40.537681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523046 ] 00:08:08.064 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.323 [2024-07-25 07:13:40.653092] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2522863 has claimed it. 00:08:08.323 [2024-07-25 07:13:40.653125] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:08.892 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2523046) - No such process 00:08:08.892 ERROR: process (pid: 2523046) is no longer running 00:08:08.892 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.892 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:08.892 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:08.892 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.892 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.892 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.892 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2522863 00:08:08.892 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2522863 00:08:08.892 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:09.461 lslocks: write error 00:08:09.461 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2522863 00:08:09.461 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2522863 ']' 00:08:09.461 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2522863 00:08:09.461 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:09.461 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.461 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2522863 00:08:09.461 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.461 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.461 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2522863' 00:08:09.461 killing process with pid 2522863 00:08:09.461 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2522863 00:08:09.461 07:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2522863 00:08:09.721 00:08:09.721 real 0m2.474s 00:08:09.721 user 0m2.693s 00:08:09.721 sys 0m0.806s 00:08:09.721 07:13:42 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.721 07:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.721 ************************************ 00:08:09.721 END TEST locking_app_on_locked_coremask 00:08:09.721 ************************************ 00:08:09.721 07:13:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:09.721 07:13:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.721 07:13:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.721 07:13:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.721 ************************************ 00:08:09.721 START TEST locking_overlapped_coremask 00:08:09.721 ************************************ 00:08:09.721 07:13:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:09.721 07:13:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2523354 00:08:09.721 07:13:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2523354 /var/tmp/spdk.sock 00:08:09.721 07:13:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2523354 ']' 00:08:09.721 07:13:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.721 07:13:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.721 07:13:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.721 07:13:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:09.721 07:13:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.721 07:13:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.721 [2024-07-25 07:13:42.238721] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:08:09.721 [2024-07-25 07:13:42.238768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523354 ] 00:08:09.981 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.981 [2024-07-25 07:13:42.320575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:09.981 [2024-07-25 07:13:42.395482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.981 [2024-07-25 07:13:42.395499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.981 [2024-07-25 07:13:42.395502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2523576 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2523576 /var/tmp/spdk2.sock 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2523576 /var/tmp/spdk2.sock 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2523576 /var/tmp/spdk2.sock 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2523576 ']' 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:10.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.550 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.809 [2024-07-25 07:13:43.101328] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:08:10.809 [2024-07-25 07:13:43.101382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523576 ] 00:08:10.809 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.809 [2024-07-25 07:13:43.221430] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2523354 has claimed it. 00:08:10.809 [2024-07-25 07:13:43.221472] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:11.376 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2523576) - No such process 00:08:11.376 ERROR: process (pid: 2523576) is no longer running 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2523354 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2523354 ']' 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2523354 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2523354 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2523354' 00:08:11.376 killing process with pid 2523354 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 2523354 00:08:11.376 07:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2523354 00:08:11.635 00:08:11.635 real 0m1.892s 00:08:11.635 user 0m5.294s 00:08:11.635 sys 0m0.470s 00:08:11.635 07:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.635 07:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:11.635 ************************************ 00:08:11.635 END TEST locking_overlapped_coremask 00:08:11.635 ************************************ 00:08:11.635 07:13:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:11.635 07:13:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.635 07:13:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.635 07:13:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:11.895 ************************************ 00:08:11.895 START TEST locking_overlapped_coremask_via_rpc 00:08:11.895 ************************************ 00:08:11.895 07:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:11.895 07:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2523669 00:08:11.895 07:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2523669 /var/tmp/spdk.sock 00:08:11.895 07:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2523669 ']' 00:08:11.895 07:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.895 07:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.895 07:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:11.895 07:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.895 07:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.895 07:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.895 [2024-07-25 07:13:44.221229] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:08:11.895 [2024-07-25 07:13:44.221278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523669 ] 00:08:11.895 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.895 [2024-07-25 07:13:44.305223] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
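Both coremask tests finish with check_remaining_locks (traced in the lines above, and again at the end of the via_rpc variant below), which verifies that exactly the expected per-core lock files survive. Its comparison, lifted into a standalone sketch using the same glob and brace expansion as cpu_locks.sh:

  # Globbed reality vs. brace-expanded expectation for cores 0-2:
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'remaining locks match'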
00:08:11.895 [2024-07-25 07:13:44.305252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:11.895 [2024-07-25 07:13:44.381354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.895 [2024-07-25 07:13:44.381450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.895 [2024-07-25 07:13:44.381452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.832 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.832 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:12.832 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2523919 00:08:12.832 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2523919 /var/tmp/spdk2.sock 00:08:12.832 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:12.832 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2523919 ']' 00:08:12.832 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:12.832 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.832 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:12.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:12.832 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.832 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.832 [2024-07-25 07:13:45.072699] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:08:12.832 [2024-07-25 07:13:45.072757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523919 ] 00:08:12.832 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.832 [2024-07-25 07:13:45.190356] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:12.832 [2024-07-25 07:13:45.190383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.832 [2024-07-25 07:13:45.338634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.832 [2024-07-25 07:13:45.338723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.832 [2024-07-25 07:13:45.338724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.400 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.400 [2024-07-25 07:13:45.885702] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2523669 has claimed it. 
00:08:13.400 request: 00:08:13.400 { 00:08:13.400 "method": "framework_enable_cpumask_locks", 00:08:13.400 "req_id": 1 00:08:13.400 } 00:08:13.401 Got JSON-RPC error response 00:08:13.401 response: 00:08:13.401 { 00:08:13.401 "code": -32603, 00:08:13.401 "message": "Failed to claim CPU core: 2" 00:08:13.401 } 00:08:13.401 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:13.401 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:13.401 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:13.401 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:13.401 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:13.401 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2523669 /var/tmp/spdk.sock 00:08:13.401 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2523669 ']' 00:08:13.401 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.401 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.401 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.401 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.401 07:13:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.660 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.660 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:13.660 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2523919 /var/tmp/spdk2.sock 00:08:13.660 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2523919 ']' 00:08:13.660 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:13.660 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.660 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:13.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
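In the via_rpc variant both targets boot with --disable-cpumask-locks, so the core locks are claimed lazily over JSON-RPC: the first target's framework_enable_cpumask_locks succeeds, and the same call against the overlapping second target returns the -32603 response shown above. Assuming rpc_cmd resolves to scripts/rpc.py as elsewhere in this run, the manual equivalent would be:

  # First target (-m 0x7) claims cores 0-2 on its default socket.
  scripts/rpc.py framework_enable_cpumask_locks
  # Overlapping target (-m 0x1c) -> error -32603 "Failed to claim CPU core: 2".
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks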
00:08:13.660 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.660 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.919 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.919 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:13.919 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:13.919 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:13.919 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:13.919 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:13.919 00:08:13.919 real 0m2.120s 00:08:13.919 user 0m0.824s 00:08:13.919 sys 0m0.224s 00:08:13.919 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.919 07:13:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.919 ************************************ 00:08:13.919 END TEST locking_overlapped_coremask_via_rpc 00:08:13.919 ************************************ 00:08:13.919 07:13:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:13.919 07:13:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2523669 ]] 00:08:13.919 07:13:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2523669 00:08:13.919 07:13:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2523669 ']' 00:08:13.919 07:13:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2523669 00:08:13.919 07:13:46 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:13.919 07:13:46 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.919 07:13:46 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2523669 00:08:13.919 07:13:46 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.919 07:13:46 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.919 07:13:46 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2523669' 00:08:13.919 killing process with pid 2523669 00:08:13.919 07:13:46 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2523669 00:08:13.919 07:13:46 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2523669 00:08:14.178 07:13:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2523919 ]] 00:08:14.178 07:13:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2523919 00:08:14.178 07:13:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2523919 ']' 00:08:14.178 07:13:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2523919 00:08:14.178 07:13:46 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:14.178 07:13:46 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:08:14.437 07:13:46 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2523919 00:08:14.437 07:13:46 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:14.437 07:13:46 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:14.437 07:13:46 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2523919' 00:08:14.437 killing process with pid 2523919 00:08:14.437 07:13:46 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2523919 00:08:14.437 07:13:46 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2523919 00:08:14.696 07:13:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:14.696 07:13:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:14.696 07:13:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2523669 ]] 00:08:14.696 07:13:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2523669 00:08:14.696 07:13:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2523669 ']' 00:08:14.696 07:13:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2523669 00:08:14.696 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2523669) - No such process 00:08:14.696 07:13:47 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2523669 is not found' 00:08:14.696 Process with pid 2523669 is not found 00:08:14.696 07:13:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2523919 ]] 00:08:14.696 07:13:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2523919 00:08:14.696 07:13:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2523919 ']' 00:08:14.696 07:13:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2523919 00:08:14.696 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2523919) - No such process 00:08:14.696 07:13:47 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2523919 is not found' 00:08:14.696 Process with pid 2523919 is not found 00:08:14.696 07:13:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:14.696 00:08:14.696 real 0m19.127s 00:08:14.696 user 0m31.229s 00:08:14.696 sys 0m6.534s 00:08:14.696 07:13:47 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.696 07:13:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:14.696 ************************************ 00:08:14.696 END TEST cpu_locks 00:08:14.696 ************************************ 00:08:14.696 00:08:14.696 real 0m43.290s 00:08:14.696 user 1m18.604s 00:08:14.696 sys 0m10.903s 00:08:14.696 07:13:47 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.696 07:13:47 event -- common/autotest_common.sh@10 -- # set +x 00:08:14.696 ************************************ 00:08:14.696 END TEST event 00:08:14.696 ************************************ 00:08:14.696 07:13:47 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:08:14.697 07:13:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:14.697 07:13:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.697 07:13:47 -- common/autotest_common.sh@10 -- # set +x 00:08:14.697 ************************************ 00:08:14.697 START TEST thread 00:08:14.697 ************************************ 00:08:14.697 07:13:47 thread -- common/autotest_common.sh@1125 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:08:14.956 * Looking for test storage... 00:08:14.956 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:08:14.956 07:13:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:14.956 07:13:47 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:14.956 07:13:47 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.956 07:13:47 thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.956 ************************************ 00:08:14.956 START TEST thread_poller_perf 00:08:14.956 ************************************ 00:08:14.956 07:13:47 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:14.956 [2024-07-25 07:13:47.338904] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:08:14.956 [2024-07-25 07:13:47.338951] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524317 ] 00:08:14.956 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.956 [2024-07-25 07:13:47.420738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.214 [2024-07-25 07:13:47.490245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.214 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:16.151 ====================================== 00:08:16.151 busy:2509661036 (cyc) 00:08:16.151 total_run_count: 439000 00:08:16.151 tsc_hz: 2500000000 (cyc) 00:08:16.151 ====================================== 00:08:16.151 poller_cost: 5716 (cyc), 2286 (nsec) 00:08:16.151 00:08:16.151 real 0m1.232s 00:08:16.151 user 0m1.133s 00:08:16.151 sys 0m0.096s 00:08:16.151 07:13:48 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.151 07:13:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:16.151 ************************************ 00:08:16.151 END TEST thread_poller_perf 00:08:16.151 ************************************ 00:08:16.151 07:13:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:16.151 07:13:48 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:16.151 07:13:48 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.151 07:13:48 thread -- common/autotest_common.sh@10 -- # set +x 00:08:16.151 ************************************ 00:08:16.151 START TEST thread_poller_perf 00:08:16.151 ************************************ 00:08:16.151 07:13:48 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:16.151 [2024-07-25 07:13:48.653365] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
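The summary block above is internally consistent: poller_cost is the busy TSC cycle count divided by the number of poller invocations, converted to nanoseconds via tsc_hz. Checking the exact figures from this run with shell arithmetic:

  echo $((2509661036 / 439000))                 # 5716 cycles per poll
  echo $((5716 * 1000000000 / 2500000000))      # 2286 nsec at tsc_hz 2500000000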
00:08:16.151 [2024-07-25 07:13:48.653407] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524581 ] 00:08:16.411 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.411 [2024-07-25 07:13:48.728110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.411 [2024-07-25 07:13:48.796439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.411 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:17.348 ====================================== 00:08:17.348 busy:2501862756 (cyc) 00:08:17.348 total_run_count: 5662000 00:08:17.348 tsc_hz: 2500000000 (cyc) 00:08:17.348 ====================================== 00:08:17.348 poller_cost: 441 (cyc), 176 (nsec) 00:08:17.348 00:08:17.348 real 0m1.219s 00:08:17.348 user 0m1.128s 00:08:17.348 sys 0m0.087s 00:08:17.348 07:13:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.348 07:13:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:17.348 ************************************ 00:08:17.348 END TEST thread_poller_perf 00:08:17.348 ************************************ 00:08:17.607 07:13:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:17.607 00:08:17.607 real 0m2.713s 00:08:17.607 user 0m2.364s 00:08:17.607 sys 0m0.363s 00:08:17.607 07:13:49 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.607 07:13:49 thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.607 ************************************ 00:08:17.607 END TEST thread 00:08:17.607 ************************************ 00:08:17.607 07:13:49 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:08:17.607 07:13:49 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:17.607 07:13:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.607 07:13:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.607 07:13:49 -- common/autotest_common.sh@10 -- # set +x 00:08:17.607 ************************************ 00:08:17.607 START TEST app_cmdline 00:08:17.607 ************************************ 00:08:17.607 07:13:49 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:17.607 * Looking for test storage... 00:08:17.607 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:17.607 07:13:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:17.607 07:13:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2524905 00:08:17.607 07:13:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:17.607 07:13:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2524905 00:08:17.607 07:13:50 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2524905 ']' 00:08:17.607 07:13:50 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.607 07:13:50 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.607 07:13:50 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:17.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.607 07:13:50 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.607 07:13:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:17.866 [2024-07-25 07:13:50.137261] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:08:17.867 [2024-07-25 07:13:50.137316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524905 ] 00:08:17.867 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.867 [2024-07-25 07:13:50.219559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.867 [2024-07-25 07:13:50.291370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.435 07:13:50 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.435 07:13:50 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:18.435 07:13:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:18.695 { 00:08:18.695 "version": "SPDK v24.09-pre git sha1 e5ef9abc9", 00:08:18.695 "fields": { 00:08:18.695 "major": 24, 00:08:18.695 "minor": 9, 00:08:18.695 "patch": 0, 00:08:18.695 "suffix": "-pre", 00:08:18.695 "commit": "e5ef9abc9" 00:08:18.695 } 00:08:18.695 } 00:08:18.695 07:13:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:18.695 07:13:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:18.695 07:13:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:18.695 07:13:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:18.695 07:13:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:18.695 07:13:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:18.695 07:13:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.695 07:13:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:18.695 07:13:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:18.695 07:13:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.695 07:13:51 
app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:18.695 07:13:51 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.956 request: 00:08:18.956 { 00:08:18.956 "method": "env_dpdk_get_mem_stats", 00:08:18.956 "req_id": 1 00:08:18.956 } 00:08:18.956 Got JSON-RPC error response 00:08:18.956 response: 00:08:18.956 { 00:08:18.956 "code": -32601, 00:08:18.956 "message": "Method not found" 00:08:18.956 } 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.956 07:13:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2524905 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2524905 ']' 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2524905 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2524905 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2524905' 00:08:18.956 killing process with pid 2524905 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@969 -- # kill 2524905 00:08:18.956 07:13:51 app_cmdline -- common/autotest_common.sh@974 -- # wait 2524905 00:08:19.216 00:08:19.216 real 0m1.705s 00:08:19.216 user 0m1.954s 00:08:19.216 sys 0m0.518s 00:08:19.216 07:13:51 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.216 07:13:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:19.216 ************************************ 00:08:19.216 END TEST app_cmdline 00:08:19.216 ************************************ 00:08:19.216 07:13:51 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:19.216 07:13:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.216 07:13:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.216 07:13:51 -- common/autotest_common.sh@10 -- # set +x 00:08:19.476 ************************************ 00:08:19.476 START TEST version 00:08:19.476 ************************************ 00:08:19.476 07:13:51 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:19.476 * Looking for test storage... 
00:08:19.476 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:19.476 07:13:51 version -- app/version.sh@17 -- # get_header_version major 00:08:19.476 07:13:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:19.476 07:13:51 version -- app/version.sh@14 -- # cut -f2 00:08:19.476 07:13:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:19.476 07:13:51 version -- app/version.sh@17 -- # major=24 00:08:19.476 07:13:51 version -- app/version.sh@18 -- # get_header_version minor 00:08:19.476 07:13:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:19.476 07:13:51 version -- app/version.sh@14 -- # cut -f2 00:08:19.476 07:13:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:19.476 07:13:51 version -- app/version.sh@18 -- # minor=9 00:08:19.476 07:13:51 version -- app/version.sh@19 -- # get_header_version patch 00:08:19.476 07:13:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:19.476 07:13:51 version -- app/version.sh@14 -- # cut -f2 00:08:19.476 07:13:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:19.476 07:13:51 version -- app/version.sh@19 -- # patch=0 00:08:19.476 07:13:51 version -- app/version.sh@20 -- # get_header_version suffix 00:08:19.476 07:13:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:19.476 07:13:51 version -- app/version.sh@14 -- # cut -f2 00:08:19.476 07:13:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:19.476 07:13:51 version -- app/version.sh@20 -- # suffix=-pre 00:08:19.476 07:13:51 version -- app/version.sh@22 -- # version=24.9 00:08:19.476 07:13:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:19.476 07:13:51 version -- app/version.sh@28 -- # version=24.9rc0 00:08:19.476 07:13:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:19.476 07:13:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:19.476 07:13:51 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:19.476 07:13:51 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:19.476 00:08:19.476 real 0m0.177s 00:08:19.476 user 0m0.093s 00:08:19.476 sys 0m0.132s 00:08:19.476 07:13:51 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.476 07:13:51 version -- common/autotest_common.sh@10 -- # set +x 00:08:19.476 ************************************ 00:08:19.476 END TEST version 00:08:19.476 ************************************ 00:08:19.476 07:13:51 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:08:19.476 07:13:51 -- spdk/autotest.sh@202 -- # uname -s 00:08:19.476 07:13:51 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:08:19.476 07:13:51 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:19.476 07:13:51 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:19.476 07:13:51 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:08:19.476 07:13:51 -- 
spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:19.476 07:13:51 -- spdk/autotest.sh@264 -- # timing_exit lib 00:08:19.476 07:13:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:19.476 07:13:51 -- common/autotest_common.sh@10 -- # set +x 00:08:19.736 07:13:52 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:08:19.736 07:13:52 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:08:19.736 07:13:52 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:08:19.736 07:13:52 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:08:19.736 07:13:52 -- spdk/autotest.sh@287 -- # '[' rdma = rdma ']' 00:08:19.736 07:13:52 -- spdk/autotest.sh@288 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:19.736 07:13:52 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.736 07:13:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.736 07:13:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.736 ************************************ 00:08:19.736 START TEST nvmf_rdma 00:08:19.736 ************************************ 00:08:19.736 07:13:52 nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:19.736 * Looking for test storage... 00:08:19.736 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:19.736 07:13:52 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:08:19.736 07:13:52 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:19.736 07:13:52 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:08:19.736 07:13:52 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.736 07:13:52 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.736 07:13:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:19.736 ************************************ 00:08:19.736 START TEST nvmf_target_core 00:08:19.736 ************************************ 00:08:19.736 07:13:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:08:19.997 * Looking for test storage... 00:08:19.997 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.997 ************************************ 00:08:19.997 START TEST nvmf_abort 00:08:19.997 ************************************ 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:19.997 * Looking for test storage... 
00:08:19.997 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.997 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.998 07:13:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:28.211 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:28.211 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:28.211 07:14:00 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:28.211 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:28.211 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:28.211 07:14:00 
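Annotation: rdma_device_init above loads the kernel RDMA stack before any addressing is attempted. The module list is taken verbatim from the trace; the loop form is added here for illustration:

    # Sketch of the load_ib_rdma_modules step seen in the trace.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done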
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:28.211 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:28.472 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:28.472 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:28.472 altname enp217s0f0np0 00:08:28.472 altname ens818f0np0 00:08:28.472 inet 192.168.100.8/24 scope global mlx_0_0 00:08:28.472 valid_lft forever preferred_lft forever 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # for 
nic_name in $(get_rdma_if_list) 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:28.472 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:28.472 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:28.472 altname enp217s0f1np1 00:08:28.472 altname ens818f1np1 00:08:28.472 inet 192.168.100.9/24 scope global mlx_0_1 00:08:28.472 valid_lft forever preferred_lft forever 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.472 07:14:00 
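Annotation: allocate_nic_ips resolves each RDMA interface to its IPv4 address with the ip/awk/cut pipeline traced above; a self-contained equivalent of that helper:

    # Re-creation of the get_ip_address pipeline from the trace: first
    # IPv4 address of an interface, prefix length stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig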
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:28.472 192.168.100.9' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:28.472 192.168.100.9' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:28.472 192.168.100.9' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:28.472 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:28.473 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.473 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:28.473 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 
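Annotation: the two discovered addresses travel as a newline-separated RDMA_IP_LIST and are split back out with head/tail, which is how NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP land on .8 and .9 above:

    # The head/tail split the trace performs, with this run's values.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)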
-- # set +x 00:08:28.473 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2529491 00:08:28.473 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:28.473 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2529491 00:08:28.473 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2529491 ']' 00:08:28.473 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.473 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.473 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.473 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.473 07:14:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.473 [2024-07-25 07:14:00.948260] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:08:28.473 [2024-07-25 07:14:00.948314] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.473 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.732 [2024-07-25 07:14:01.032287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.732 [2024-07-25 07:14:01.102390] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.732 [2024-07-25 07:14:01.102429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.732 [2024-07-25 07:14:01.102439] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.732 [2024-07-25 07:14:01.102447] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.732 [2024-07-25 07:14:01.102453] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
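Annotation: nvmfappstart launches nvmf_tgt with all tracepoint groups enabled (-e 0xFFFF) on cores 1-3 (-m 0xE), records nvmfpid, and waitforlisten polls the RPC socket until the target answers. A sketch of that start-and-wait using this workspace's paths; the polling loop is illustrative, not common.sh's exact body:

    # Start the target and block until /var/tmp/spdk.sock accepts RPCs.
    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done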
00:08:28.732 [2024-07-25 07:14:01.102556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.732 [2024-07-25 07:14:01.102657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.732 [2024-07-25 07:14:01.102660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.300 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.300 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:08:29.300 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.300 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.300 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.300 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.300 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:08:29.300 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.300 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.560 [2024-07-25 07:14:01.839224] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e6b500/0x1e6f9f0) succeed. 00:08:29.560 [2024-07-25 07:14:01.856491] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e6caa0/0x1eb1080) succeed. 00:08:29.560 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.560 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:29.560 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.560 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.560 Malloc0 00:08:29.560 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.560 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:29.560 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.560 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.560 Delay0 00:08:29.560 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.560 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:29.560 07:14:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.560 [2024-07-25 07:14:02.024140] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.560 07:14:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:29.560 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.820 [2024-07-25 07:14:02.129150] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:31.726 Initializing NVMe Controllers 00:08:31.726 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:31.726 controller IO queue size 128 less than required 00:08:31.726 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:31.726 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:31.726 Initialization complete. Launching workers. 
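Annotation: the target-side configuration for the abort test is spread across the rpc_cmd traces above; gathered into one plain script with the same arguments as this run, it reads:

    # The RPC sequence this run traced, collected in one place.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

Layering Delay0 on Malloc0 keeps I/O in flight long enough for the abort example to have something to cancel.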
00:08:31.726 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51545 00:08:31.726 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51606, failed to submit 62 00:08:31.726 success 51546, unsuccess 60, failed 0 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.726 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:31.986 rmmod nvme_rdma 00:08:31.986 rmmod nvme_fabrics 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2529491 ']' 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2529491 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2529491 ']' 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2529491 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2529491 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2529491' 00:08:31.986 killing process with pid 2529491 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2529491 00:08:31.986 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2529491 00:08:32.246 07:14:04 
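Annotation: the abort counters above reconcile exactly: 51546 successful + 60 unsuccessful aborts = 51606 submitted, and 51606 + 62 that failed to submit = 51668, matching the I/O side (123 completed + 51545 failed, i.e. aborted, = 51668). The 60 "unsuccess" entries are most likely aborts that raced with normal completion; the test treats both outcomes as acceptable, consistent with the "failed 0" result.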
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.246 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:32.246 00:08:32.246 real 0m12.239s 00:08:32.246 user 0m14.895s 00:08:32.246 sys 0m6.942s 00:08:32.246 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.246 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:32.246 ************************************ 00:08:32.246 END TEST nvmf_abort 00:08:32.246 ************************************ 00:08:32.246 07:14:04 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:32.246 07:14:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:32.246 07:14:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.246 07:14:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.246 ************************************ 00:08:32.246 START TEST nvmf_ns_hotplug_stress 00:08:32.246 ************************************ 00:08:32.246 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:32.506 * Looking for test storage... 00:08:32.506 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
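Annotation: when ns_hotplug_stress re-sources nvmf/common.sh, the harness mints a fresh host identity with nvme gen-hostnqn, and the HOSTID above is the bare UUID embedded in that NQN. One way to peel it out, matching this run's values (the parameter expansion is illustrative, not necessarily common.sh's exact method):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # e.g. 8013ee90-59d8-e711-906e-00163566263e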
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:32.506 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:32.507 07:14:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:40.635 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:40.635 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.635 07:14:12 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:40.635 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:40.635 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:40.635 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:40.636 07:14:12 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:40.636 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:40.636 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:40.636 altname enp217s0f0np0 00:08:40.636 altname ens818f0np0 00:08:40.636 inet 192.168.100.8/24 scope global mlx_0_0 00:08:40.636 valid_lft forever preferred_lft forever 
00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:40.636 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:40.636 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:40.636 altname enp217s0f1np1 00:08:40.636 altname ens818f1np1 00:08:40.636 inet 192.168.100.9/24 scope global mlx_0_1 00:08:40.636 valid_lft forever preferred_lft forever 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:40.636 07:14:12 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:40.636 192.168.100.9' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:40.636 192.168.100.9' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:40.636 192.168.100.9' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:40.636 07:14:12 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2534214 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2534214 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2534214 ']' 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.636 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.637 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.637 07:14:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.637 [2024-07-25 07:14:13.014172] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:08:40.637 [2024-07-25 07:14:13.014232] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.637 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.637 [2024-07-25 07:14:13.098056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.896 [2024-07-25 07:14:13.167903] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.896 [2024-07-25 07:14:13.167942] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.896 [2024-07-25 07:14:13.167952] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.896 [2024-07-25 07:14:13.167961] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.896 [2024-07-25 07:14:13.167968] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
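For reference, the startup the trace above walks through reduces to the short shell sketch below. The get_ip_address body is copied from the nvmf/common.sh@112-113 trace; -m 0xE pins nvmf_tgt to cores 1-3 (matching the three reactor notices that follow) and -e 0xFFFF enables every tracepoint group, as the app_setup_trace notices state. The polling loop is only an illustrative stand-in for the harness's waitforlisten helper, using rpc_get_methods as a cheap probe of the default /var/tmp/spdk.sock RPC socket; run from the spdk checkout.

get_ip_address() {
    local interface=$1
    # field 4 of "ip -o -4 addr show" is "A.B.C.D/prefix"; strip the prefix
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &      # all tracepoint groups, cores 1-3
nvmfpid=$!
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1    # target not yet listening on /var/tmp/spdk.sock
done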
00:08:40.896 [2024-07-25 07:14:13.168070] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.896 [2024-07-25 07:14:13.168152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.896 [2024-07-25 07:14:13.168154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.464 07:14:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.464 07:14:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:08:41.464 07:14:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.464 07:14:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.464 07:14:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.464 07:14:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.464 07:14:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:41.464 07:14:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:41.723 [2024-07-25 07:14:14.042531] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13d1500/0x13d59f0) succeed. 00:08:41.723 [2024-07-25 07:14:14.051790] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13d2aa0/0x1417080) succeed. 00:08:41.723 07:14:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:41.982 07:14:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:42.241 [2024-07-25 07:14:14.518826] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:42.241 07:14:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:42.241 07:14:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:42.499 Malloc0 00:08:42.499 07:14:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:42.758 Delay0 00:08:42.758 07:14:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.758 07:14:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:08:43.017 NULL1 00:08:43.017 07:14:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:43.276 07:14:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2534633 00:08:43.276 07:14:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:43.276 07:14:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:43.276 07:14:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.276 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.659 Read completed with error (sct=0, sc=11) 00:08:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.660 07:14:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.660 07:14:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:44.660 07:14:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:44.660 true 00:08:44.660 07:14:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:44.660 07:14:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.597 07:14:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.856 07:14:18 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:45.856 07:14:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:45.856 true 00:08:45.856 07:14:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:45.856 07:14:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.793 07:14:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.052 07:14:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:47.052 07:14:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:47.052 true 00:08:47.052 07:14:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:47.052 07:14:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.030 07:14:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.321 07:14:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:48.321 07:14:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:48.321 true 00:08:48.321 07:14:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2534633 00:08:48.321 07:14:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.257 07:14:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.257 07:14:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:49.257 07:14:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:49.516 true 00:08:49.516 07:14:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:49.516 07:14:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.453 07:14:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.453 07:14:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:50.453 07:14:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:50.712 true 00:08:50.712 07:14:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:50.712 07:14:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.648 07:14:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.648 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:08:51.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.648 07:14:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:51.648 07:14:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:51.906 true 00:08:51.906 07:14:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:51.906 07:14:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.841 07:14:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.841 07:14:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:52.841 07:14:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:53.099 true 00:08:53.099 07:14:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:53.099 07:14:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.035 07:14:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.035 07:14:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:54.035 07:14:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:54.293 true 00:08:54.293 07:14:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:54.293 07:14:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.228 07:14:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.229 07:14:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:55.229 07:14:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:55.487 true 00:08:55.487 07:14:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:55.487 07:14:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.422 07:14:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.422 07:14:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:56.422 07:14:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:56.681 true 00:08:56.681 07:14:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:56.681 07:14:29 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.618 07:14:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.618 07:14:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:57.618 07:14:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:57.877 true 00:08:57.877 07:14:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:57.877 07:14:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.136 07:14:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.136 07:14:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:58.136 07:14:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:58.395 true 00:08:58.395 07:14:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:08:58.395 07:14:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.773 07:14:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.773 07:14:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:59.773 07:14:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:59.773 true 00:09:00.032 07:14:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:00.032 07:14:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.861 07:14:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.861 07:14:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:00.861 07:14:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:01.119 true 00:09:01.119 07:14:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:01.119 07:14:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.057 07:14:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.057 07:14:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:02.057 07:14:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:02.316 true 00:09:02.316 07:14:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:02.316 07:14:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.255 07:14:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.255 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.255 07:14:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:03.255 07:14:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:03.515 true 00:09:03.515 07:14:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:03.515 07:14:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.528 07:14:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.528 07:14:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:04.528 07:14:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:04.787 true 00:09:04.787 07:14:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:04.787 07:14:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.726 07:14:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.726 07:14:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1019 00:09:05.726 07:14:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:05.726 true 00:09:05.985 07:14:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:05.985 07:14:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.924 07:14:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.924 07:14:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:06.924 07:14:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:07.183 true 00:09:07.183 07:14:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:07.183 07:14:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.122 07:14:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.122 07:14:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:08.122 07:14:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:08.381 true 00:09:08.381 07:14:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:08.381 07:14:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.317 07:14:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.317 07:14:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:09.317 07:14:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:09.576 true 00:09:09.576 07:14:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:09.576 07:14:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.514 07:14:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.514 07:14:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:10.514 07:14:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:10.774 true 00:09:10.774 07:14:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:10.774 07:14:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.710 07:14:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.710 
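The cycle traced through here is the core of the hotplug stress: ns_hotplug_stress.sh@44-@50 detaches namespace 1, re-attaches Delay0, and bumps the NULL1 size by one unit per pass, while the spdk_nvme_perf job started at @40 (PERF_PID; 30 seconds of 512-byte queue-depth-128 random reads over RDMA to 192.168.100.8:4420) keeps I/O in flight. The suppressed "Read completed with error (sct=0, sc=11)" lines are that workload racing a namespace detach; SCT 0 / SC 0x0B is the generic Invalid Namespace or Format status, which is the expected failure mode here. Paraphrased as a sketch, with rpc an assumed shorthand for scripts/rpc.py and the loop order taken from the trace rather than the script verbatim:

rpc=scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do          # run until spdk_nvme_perf exits
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))                   # 1000, 1001, 1002, ... as traced
    "$rpc" bdev_null_resize NULL1 "$null_size"     # new size, not a delta
done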
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.710 07:14:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:11.710 07:14:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:11.969 true 00:09:11.969 07:14:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:11.969 07:14:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.906 07:14:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.906 07:14:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:12.906 07:14:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:13.165 true 00:09:13.165 07:14:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:13.165 07:14:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.103 07:14:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.103 07:14:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:14.103 07:14:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:14.362 true 00:09:14.362 07:14:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:14.362 07:14:46 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.622 07:14:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.881 07:14:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:14.881 07:14:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:14.881 true 00:09:14.881 07:14:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:14.881 07:14:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.140 07:14:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.399 07:14:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:15.399 07:14:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:15.399 true 00:09:15.399 07:14:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:15.399 07:14:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.659 07:14:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.919 07:14:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:15.919 07:14:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:15.919 Initializing NVMe Controllers 00:09:15.919 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:15.919 Controller IO queue size 128, less than required. 00:09:15.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:15.919 Controller IO queue size 128, less than required. 00:09:15.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:15.919 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:15.919 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:15.919 Initialization complete. Launching workers. 
00:09:15.919 ========================================================
00:09:15.919 Latency(us)
00:09:15.919 Device Information : IOPS MiB/s Average min max
00:09:15.919 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5594.97 2.73 20492.99 798.25 1133947.18
00:09:15.919 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 34334.93 16.77 3727.85 1471.20 285818.92
00:09:15.919 ========================================================
00:09:15.919 Total : 39929.90 19.50 6076.98 798.25 1133947.18
00:09:15.919
00:09:15.919 true 00:09:16.178 07:14:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2534633 00:09:16.178 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2534633) - No such process 00:09:16.178 07:14:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2534633 00:09:16.178 07:14:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.178 07:14:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.437 07:14:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:16.437 07:14:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:16.437 07:14:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:16.437 07:14:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.437 07:14:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:16.696 null0 00:09:16.696 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:16.697 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.697 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:16.697 null1 00:09:16.697 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:16.697 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.697 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:16.955 null2 00:09:16.955 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:16.955 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.955 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:09:17.215 null3 00:09:17.215 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.215 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.215 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:17.215 null4 00:09:17.215 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.215 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.215 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:17.474 null5 00:09:17.474 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.474 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.474 07:14:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:17.733 null6 00:09:17.733 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.734 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.734 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:17.994 null7 00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
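At this point eight null bdevs exist (null0 through null7, each created with bdev_null_create null$i 100 4096 as traced above), and the loop beginning here forks one add_remove worker per bdev. From the @14-@18 trace lines, add_remove is ten attach/detach cycles of a fixed namespace ID; reconstructed below together with the spawn-and-wait scaffolding from @58-@66. This is a paraphrase of the traced steps, not the script verbatim:

add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        # attach $bdev as namespace $nsid, then immediately detach it again
        scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &    # add_remove 1 null0 ... add_remove 8 null7
    pids+=($!)
done
wait "${pids[@]}"                       # the traced "wait 2540724 2540727 ..." step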
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.994 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
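The @62-@64 lines trace the stress fan-out: each loop pass launches an add_remove worker in the background and records its PID, and the wait that appears just below blocks on all eight jobs. A hedged sketch of that pattern as implied by the xtrace (the loop shape is an assumption, not the verbatim script):

    # Hypothetical reconstruction of the worker fan-out traced above:
    # nsid i+1 is paired with bdev null<i>, and each worker runs concurrently.
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"   # the "@66 -- # wait 2540724 ..." line below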
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2540724 2540727 2540728 2540730 2540732 2540734 2540736 2540738
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:17.995 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
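With one full round now traced, the @14-@18 lines imply the shape of the add_remove worker itself. A reconstruction under the assumption of a plain ten-pass loop (a sketch, not the verbatim script):

    # Hypothetical reconstruction of the add_remove worker traced above:
    # attach the bdev as a namespace of cnode1, detach it, ten times over.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

The point of the stress is that eight such workers hammer the same subsystem concurrently while host I/O may be in flight, exercising the target's namespace hot-plug paths.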
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.255 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:18.515 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:18.515 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:18.515 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:18.515 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:18.515 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:18.515 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:18.515 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:18.515 07:14:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:18.774 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:18.775 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:19.034 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.034 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.034 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:19.034 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.034 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.034 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:19.034 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.034 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.034 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:19.034 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.035 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.326 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
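Every subsequent round issues the same sixteen RPCs; only the interleaving shifts, because eight workers race against one subsystem. For reference, a single hot-plug cycle as it could be issued by hand against the same target (paths taken from this log; running it standalone assumes the target process from this run is still up):

    # One manual attach/detach cycle against cnode1 (illustrative usage).
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$rpc_py" nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1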
00:09:19.586 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:19.586 07:14:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:19.586 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:19.586 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:19.586 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:19.586 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:19.586 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:19.586 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:19.846 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.105 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:20.364 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:20.364 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:20.364 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:20.364 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:20.364 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:20.364 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:20.364 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:20.364 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.624 07:14:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:20.624 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:20.624 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:20.624 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:20.624 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:20.624 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:20.624 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:20.624 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:20.624 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.884 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:21.143 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:21.143 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:21.143 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:21.143 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:21.143 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:21.143 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:21.143 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:21.143 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:21.403 07:14:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2534214 ']'
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2534214
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2534214 ']'
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2534214
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:21.663 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2534214
00:09:21.922 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:09:21.922 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:09:21.922 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2534214'
00:09:21.922 killing process with pid 2534214
00:09:21.922 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2534214
00:09:21.922 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2534214
00:09:21.922 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:21.922 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:09:21.922
00:09:21.922 real 0m49.748s
00:09:21.922 user 3m17.004s
00:09:21.922 sys 0m15.776s
00:09:21.922 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
xtrace_disable 00:09:21.922 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.922 ************************************ 00:09:21.922 END TEST nvmf_ns_hotplug_stress 00:09:21.922 ************************************ 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.182 ************************************ 00:09:22.182 START TEST nvmf_delete_subsystem 00:09:22.182 ************************************ 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:22.182 * Looking for test storage... 00:09:22.182 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.182 07:14:54 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories, repeated by earlier sourcing, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[repeats elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeats elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeats elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.182 07:14:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.304 07:15:02 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:30.304 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:30.305 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:30.305 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:30.305 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:30.305 07:15:02 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:30.305 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:30.305 07:15:02 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.305 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:30.565 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:30.565 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:30.565 altname enp217s0f0np0 00:09:30.565 altname ens818f0np0 00:09:30.565 inet 192.168.100.8/24 scope global mlx_0_0 00:09:30.565 valid_lft forever preferred_lft forever 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:30.565 07:15:02 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:30.565 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:30.565 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:30.565 altname enp217s0f1np1 00:09:30.565 altname ens818f1np1 00:09:30.565 inet 192.168.100.9/24 scope global mlx_0_1 00:09:30.565 valid_lft forever preferred_lft forever 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:30.565 07:15:02 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:30.565 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:30.566 192.168.100.9' 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:30.566 192.168.100.9' 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:30.566 192.168.100.9' 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # 
modprobe nvme-rdma 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2545740 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2545740 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2545740 ']' 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.566 07:15:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:30.566 [2024-07-25 07:15:03.045663] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:09:30.566 [2024-07-25 07:15:03.045711] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.566 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.825 [2024-07-25 07:15:03.129779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:30.825 [2024-07-25 07:15:03.202108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.825 [2024-07-25 07:15:03.202146] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.825 [2024-07-25 07:15:03.202155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.825 [2024-07-25 07:15:03.202164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.825 [2024-07-25 07:15:03.202171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
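The nvmfappstart/waitforlisten pair traced just above reduces to a start-then-poll pattern: launch nvmf_tgt in the background, then retry a trivial RPC until the app answers on its socket. A minimal sketch of that pattern, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket (the retry bound and the bail-out are illustrative, not the harness's own helpers):

  # Start the target with the flags shown in the log, then poll the RPC
  # socket; rpc_get_methods succeeds only once the RPC server is listening.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
      sleep 0.1
  done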
00:09:30.825 [2024-07-25 07:15:03.202217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.825 [2024-07-25 07:15:03.202220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.393 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.393 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:09:31.393 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.393 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:31.393 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.393 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.393 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:31.393 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.393 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.393 [2024-07-25 07:15:03.915149] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6d2840/0x6d6d30) succeed. 00:09:31.653 [2024-07-25 07:15:03.924142] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6d3d40/0x7183c0) succeed. 00:09:31.653 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.653 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:31.653 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.653 07:15:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.653 [2024-07-25 07:15:04.013418] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.653 NULL1 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.653 Delay0 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2546009 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:31.653 07:15:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:31.654 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.654 [2024-07-25 07:15:04.127432] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
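Stripped of the xtrace noise, the body of this test is a handful of RPCs plus one perf run. A condensed sketch, with every value taken from the log itself (rpc.py is assumed to talk to the default /var/tmp/spdk.sock, perf_pid stands in for the script's $perf_pid, and the delete plus bounded wait sketched at the end appear in the log lines that follow):

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc.py bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512 B blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s avg/p99 latencies (microseconds)
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2                                              # let perf connect and fill its queues
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete with I/O still in flight
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do            # give perf ~15 s to notice and exit
      (( delay++ > 30 )) && exit 1
      sleep 0.5
  done

The 1 s Delay0 bdev is what makes the test meaningful: at queue depth 128 nearly every I/O perf has issued is still outstanding when the subsystem disappears two seconds in, so subsystem deletion is exercised against live queues rather than an idle target.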
00:09:33.560 07:15:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.560 07:15:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.560 07:15:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:34.938 NVMe io qpair process completion error 00:09:34.938 NVMe io qpair process completion error 00:09:34.938 NVMe io qpair process completion error 00:09:34.938 NVMe io qpair process completion error 00:09:34.938 NVMe io qpair process completion error 00:09:34.938 NVMe io qpair process completion error 00:09:34.938 07:15:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.938 07:15:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:34.939 07:15:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2546009 00:09:34.939 07:15:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:35.197 07:15:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:35.197 07:15:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2546009 00:09:35.198 07:15:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:35.766 Read completed with error (sct=0, sc=8) 00:09:35.766 starting I/O failed: -6 00:09:35.766 Read completed with error (sct=0, sc=8) 00:09:35.766 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 
00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 starting I/O failed: -6 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Read completed with error (sct=0, sc=8) 00:09:35.767 Write completed with error (sct=0, sc=8) 00:09:35.767 Write completed with 
error (sct=0, sc=8)
00:09:35.767 [... several hundred further "Read/Write completed with error (sct=0, sc=8)" completions, many followed by "starting I/O failed: -6", elided for readability ...]
00:09:35.768 Initializing NVMe Controllers
00:09:35.768 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:35.768 Controller IO queue size 128, less than required.
00:09:35.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:35.768 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:35.768 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:35.768 Initialization complete. Launching workers.
00:09:35.768 ========================================================
00:09:35.768                                                                                  Latency(us)
00:09:35.768 Device Information                                                          :    IOPS   MiB/s    Average        min        max
00:09:35.768 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   80.32    0.04 1596172.63 1000072.04 2984678.41
00:09:35.768 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   80.32    0.04 1597569.32 1000807.02 2985855.29
00:09:35.768 ========================================================
00:09:35.768 Total                                                                       :  160.64    0.08 1596870.97 1000072.04 2985855.29
00:09:35.768
00:09:35.768 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:35.768 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2546009
00:09:35.768 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:35.768 [2024-07-25 07:15:08.225405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:09:35.768 [2024-07-25 07:15:08.225449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
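The storm of aborted commands above is the intended outcome of this test: spdk_nvme_perf keeps a 128-deep queue against cnode1 while the harness deletes the subsystem underneath it, so the queued commands complete with sct=0, sc=8 (status code type 0 is the NVMe generic command set, where code 0x8 reads as "Command Aborted due to SQ Deletion", consistent with the target tearing down its queue pairs), and the host qpair finally dies with transport error -6. The 1,000,000+ us minimum latencies are likewise expected, since the namespace is backed by a delay bdev (Delay0). A minimal sketch of the same delete-under-I/O pattern outside the harness, assuming a running nvmf target with an RDMA transport, an existing Delay0 bdev, and an SPDK checkout at $SPDK_DIR (the literal PIDs and workspace paths in this log are specific to the CI node):

  rpc="$SPDK_DIR/scripts/rpc.py"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # 128-deep random 70/30 read/write of 512-byte I/Os for 3 s (the log also pins
  # cores with -c 0xC and uses -P 4; both dropped here for brevity)
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -o 512 -w randrw -M 70 -t 3 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
  perf_pid=$!

  sleep 1                                                 # let perf connect and ramp up
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # pull the subsystem out from under the I/O

  # poll the way delete_subsystem.sh does: perf must notice the dead qpair and exit on its own
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && { echo "perf did not exit" >&2; exit 1; }
      sleep 0.5
  done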
00:09:35.768 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2546009 00:09:36.338 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2546009) - No such process 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2546009 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2546009 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2546009 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:36.338 [2024-07-25 07:15:08.744734] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2547120 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2547120 00:09:36.338 07:15:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:36.338 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.338 [2024-07-25 07:15:08.831363] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:36.907 07:15:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:36.907 07:15:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2547120 00:09:36.907 07:15:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:37.475 07:15:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:37.475 07:15:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2547120 00:09:37.475 07:15:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:38.043 07:15:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:38.043 07:15:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2547120 00:09:38.043 07:15:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:38.301 07:15:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:38.302 07:15:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2547120 00:09:38.302 07:15:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:38.869 07:15:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:38.869 07:15:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2547120 00:09:38.869 07:15:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:39.437 07:15:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:39.437 07:15:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2547120 00:09:39.437 07:15:11 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[... the (( delay++ > 20 )) / kill -0 2547120 / sleep 0.5 poll cycle repeats about twice a second from 00:09:40.038 through 00:09:43.397 while spdk_nvme_perf runs ...]
00:09:43.656 Initializing NVMe Controllers
00:09:43.656 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:43.656 Controller IO queue size 128, less than required.
00:09:43.656 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:43.656 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:43.656 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:43.656 Initialization complete. Launching workers.
00:09:43.656 ========================================================
00:09:43.656                                                                                  Latency(us)
00:09:43.656 Device Information                                                          :    IOPS   MiB/s    Average        min        max
00:09:43.656 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1001319.75 1000062.33 1003718.63
00:09:43.656 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1002518.07 1000073.18 1006335.28
00:09:43.656 ========================================================
00:09:43.656 Total                                                                       :  256.00    0.12 1001918.91 1000062.33 1006335.28
00:09:43.656
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2547120
00:09:43.916 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2547120) - No such process
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2547120
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2545740 ']'
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2545740
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2545740 ']'
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2545740
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
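With the second perf run finished cleanly (the ~1.0 s latencies are again the delay bdev, and this time the queues drain without aborts), the trace above is the standard nvmftestfini teardown: unload the initiator-side kernel modules, then kill the nvmf target by PID. A rough standalone equivalent, assuming the same module names and an $nvmfpid captured at launch; the retry loop loosely mirrors nvmf/common.sh, which attempts the unload up to 20 times because the modules stay busy until every NVMe-oF qpair is gone:

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      sleep 0.5   # modules stay pinned while any fabrics connection is still up
  done
  set -e
  kill "$nvmfpid"
  wait "$nvmfpid" 2>/dev/null || true   # reap the target; wait fails once the PID is gone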
00:09:43.916 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2545740 00:09:44.175 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.175 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.175 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2545740' 00:09:44.175 killing process with pid 2545740 00:09:44.175 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2545740 00:09:44.175 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2545740 00:09:44.175 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.175 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:44.175 00:09:44.175 real 0m22.166s 00:09:44.175 user 0m50.509s 00:09:44.175 sys 0m7.606s 00:09:44.175 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.175 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:44.175 ************************************ 00:09:44.175 END TEST nvmf_delete_subsystem 00:09:44.175 ************************************ 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.435 ************************************ 00:09:44.435 START TEST nvmf_host_management 00:09:44.435 ************************************ 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:09:44.435 * Looking for test storage... 
00:09:44.435 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.435 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three /opt prefixes repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=[... the same list with /opt/go/1.21.1/bin prepended ...]
00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=[... the same list with /opt/protoc/21.7/bin prepended ...]
00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo [... the exported PATH, elided ...]
00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0
00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.436 07:15:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@297 -- # local -ga x722 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:52.555 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma 
]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:52.555 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:52.555 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:52.555 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@414 -- # is_hw=yes 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:52.555 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:52.556 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:52.556 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:52.556 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:52.556 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:52.556 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:52.556 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:52.556 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:52.556 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:52.556 07:15:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.556 07:15:25 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:52.556 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:52.556 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:52.556 altname enp217s0f0np0 00:09:52.556 altname ens818f0np0 00:09:52.556 inet 192.168.100.8/24 scope global mlx_0_0 00:09:52.556 valid_lft forever preferred_lft forever 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:52.556 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:52.556 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:52.556 altname enp217s0f1np1 00:09:52.556 altname ens818f1np1 00:09:52.556 inet 192.168.100.9/24 scope global mlx_0_1 00:09:52.556 valid_lft forever preferred_lft forever 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- 
# '[' '' == iso ']' 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:52.556 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:52.816 07:15:25 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:52.816 192.168.100.9' 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:52.816 192.168.100.9' 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:52.816 192.168.100.9' 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2552704 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2552704 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2552704 ']' 00:09:52.816 07:15:25 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.816 07:15:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.816 [2024-07-25 07:15:25.195575] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:09:52.816 [2024-07-25 07:15:25.195622] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.816 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.816 [2024-07-25 07:15:25.278793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.076 [2024-07-25 07:15:25.354798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.076 [2024-07-25 07:15:25.354832] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.076 [2024-07-25 07:15:25.354841] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.076 [2024-07-25 07:15:25.354849] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.076 [2024-07-25 07:15:25.354856] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:53.076 [2024-07-25 07:15:25.354958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.076 [2024-07-25 07:15:25.355042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.076 [2024-07-25 07:15:25.355154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.076 [2024-07-25 07:15:25.355155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:53.644 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.644 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:53.644 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.644 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.644 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.644 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.644 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:53.644 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.645 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.645 [2024-07-25 07:15:26.081497] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23640d0/0x23685c0) succeed. 00:09:53.645 [2024-07-25 07:15:26.090947] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2365710/0x23a9c50) succeed. 
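At this point the host_management target side is up: nvmf_tgt was started with core mask 0x1E (cores 1-4), the harness polled its RPC socket, and an RDMA transport was created, which is what instantiates the two mlx5 IB devices noticed above. A condensed sketch of the same bring-up, assuming $SPDK_DIR and the default /var/tmp/spdk.sock RPC socket; the mask, I/O unit size, and buffer count are copied from the log, and the polling loop is a crude stand-in for the suite's waitforlisten helper:

  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!

  # wait until the target answers on its RPC socket
  until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192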
00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.905 Malloc0 00:09:53.905 [2024-07-25 07:15:26.272506] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2552826 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2552826 /var/tmp/bdevperf.sock 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2552826 ']' 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:53.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
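Backing up a step: the rm/cat/rpc_cmd trio traced above (host_management.sh@22-@30) replays a saved batch of RPCs that stands the subsystem up end to end; only its effects (Malloc0, the 192.168.100.8:4420 RDMA listener) show in the log. Reconstructed as individual rpc.py calls, the sequence is roughly the following; the exact flags live in host_management.sh, and the bdev sizes are inferred from the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE defaults used elsewhere in these tests:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Back the subsystem with a 64 MiB malloc bdev (512-byte blocks)
$rpc bdev_malloc_create 64 512 -b Malloc0

# Create cnode0, attach the namespace, authorize host0, listen on RDMA
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma \
    -a 192.168.100.8 -s 4420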
00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.905 { 00:09:53.905 "params": { 00:09:53.905 "name": "Nvme$subsystem", 00:09:53.905 "trtype": "$TEST_TRANSPORT", 00:09:53.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.905 "adrfam": "ipv4", 00:09:53.905 "trsvcid": "$NVMF_PORT", 00:09:53.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.905 "hdgst": ${hdgst:-false}, 00:09:53.905 "ddgst": ${ddgst:-false} 00:09:53.905 }, 00:09:53.905 "method": "bdev_nvme_attach_controller" 00:09:53.905 } 00:09:53.905 EOF 00:09:53.905 )") 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:53.905 07:15:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.905 "params": { 00:09:53.905 "name": "Nvme0", 00:09:53.905 "trtype": "rdma", 00:09:53.905 "traddr": "192.168.100.8", 00:09:53.905 "adrfam": "ipv4", 00:09:53.905 "trsvcid": "4420", 00:09:53.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:53.905 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:53.905 "hdgst": false, 00:09:53.905 "ddgst": false 00:09:53.905 }, 00:09:53.905 "method": "bdev_nvme_attach_controller" 00:09:53.905 }' 00:09:53.905 [2024-07-25 07:15:26.375715] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:09:53.905 [2024-07-25 07:15:26.375767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552826 ] 00:09:53.905 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.165 [2024-07-25 07:15:26.462144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.165 [2024-07-25 07:15:26.532680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.424 Running I/O for 10 seconds... 
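gen_nvmf_target_json, whose trace and rendered output both appear above, expands one bdev_nvme_attach_controller stanza per subsystem ID from a heredoc template, comma-joins the stanzas via IFS, and pretty-prints the result with jq. A condensed sketch of the same mechanism (the real helper lives in nvmf/common.sh; the parameter defaults here stand in for the test's environment variables):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-rdma}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-192.168.100.8}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # The first character of IFS (the comma) joins the stanzas
    local IFS=,
    jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
}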
00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:54.683 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:54.942 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1515 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1515 -ge 100 ']' 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
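Everything from framework_wait_init down to 'return 0' above is waitforio: before the test is allowed to break anything, it proves the job is really moving I/O by sampling the bdev's read counter up to ten times and requiring at least 100 completed reads (1515 on the first sample here). A sketch of that gate, assuming the stock rpc.py client and an illustrative inter-sample sleep:

rpc_sock=/var/tmp/bdevperf.sock
bdev=Nvme0n1
ret=1
for ((i = 10; i != 0; i--)); do
    read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0   # enough reads completed; the job is live
        break
    fi
    sleep 0.25
done
((ret == 0)) || echo "no I/O observed on $bdev" >&2

The gate has just passed and host0 has been removed from cnode0; directly below, the host is re-added and the script sleeps for a second while the disconnect plays out. That forced disconnect is what triggers the controller reset and the long run of ABORTED - SQ DELETION completions that follows.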
00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.943 07:15:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:55.880 [2024-07-25 07:15:28.265312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:09:55.880 [2024-07-25 07:15:28.265347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.880 [2024-07-25 07:15:28.265367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:09:55.880 [2024-07-25 07:15:28.265381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.880 [2024-07-25 07:15:28.265392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:09:55.880 [2024-07-25 07:15:28.265402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.880 [2024-07-25 07:15:28.265413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:09:55.880 [2024-07-25 07:15:28.265422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.880 [2024-07-25 07:15:28.265434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182500 00:09:55.880 [2024-07-25 07:15:28.265443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182500 00:09:55.881 [2024-07-25 07:15:28.265463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:09:55.881 [2024-07-25 07:15:28.265484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 
07:15:28.265494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:09:55.881 [2024-07-25 07:15:28.265504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182500 00:09:55.881 [2024-07-25 07:15:28.265525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:09:55.881 [2024-07-25 07:15:28.265546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:09:55.881 [2024-07-25 07:15:28.265565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:09:55.881 [2024-07-25 07:15:28.265588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:09:55.881 [2024-07-25 07:15:28.265609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182500 00:09:55.881 [2024-07-25 07:15:28.265637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265885] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:09:55.881 [2024-07-25 07:15:28.265934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:09:55.881 [2024-07-25 07:15:28.265955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:09:55.881 [2024-07-25 07:15:28.265975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.265986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:09:55.881 [2024-07-25 07:15:28.265995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.266006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:09:55.881 [2024-07-25 07:15:28.266015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.266026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:09:55.881 [2024-07-25 07:15:28.266034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.266046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:09:55.881 [2024-07-25 07:15:28.266056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.266067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 
nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:09:55.881 [2024-07-25 07:15:28.266076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.266087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:09:55.881 [2024-07-25 07:15:28.266099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.266110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:09:55.881 [2024-07-25 07:15:28.266120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.266131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:09:55.881 [2024-07-25 07:15:28.266140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.266151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:09:55.881 [2024-07-25 07:15:28.266160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.266171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182700 00:09:55.881 [2024-07-25 07:15:28.266181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.881 [2024-07-25 07:15:28.266192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba93000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab4000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bad5000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79232 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20000baf6000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be2f000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be0e000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bcc4000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bca3000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8ff000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8de000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8bd000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d89c000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d87b000 len:0x10000 
key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d85a000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d839000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d818000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7f7000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7d6000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7b5000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d794000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d773000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d752000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 
07:15:28.266639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d731000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.266669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d710000 len:0x10000 key:0x182400 00:09:55.882 [2024-07-25 07:15:28.266679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b2321000 sqhd:52b0 p:0 m:0 dnr:0 00:09:55.882 [2024-07-25 07:15:28.268596] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:09:55.882 [2024-07-25 07:15:28.269483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:55.882 task offset: 81920 on job bdev=Nvme0n1 fails 00:09:55.882 00:09:55.882 Latency(us) 00:09:55.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.882 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:55.882 Job: Nvme0n1 ended in about 1.55 seconds with error 00:09:55.882 Verification LBA range: start 0x0 length 0x400 00:09:55.882 Nvme0n1 : 1.55 1056.87 66.05 41.24 0.00 57742.90 2188.90 1020054.73 00:09:55.882 =================================================================================================================== 00:09:55.882 Total : 1056.87 66.05 41.24 0.00 57742.90 2188.90 1020054.73 00:09:55.882 [2024-07-25 07:15:28.271028] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:55.882 07:15:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2552826 00:09:55.882 07:15:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:55.882 07:15:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:55.882 07:15:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:55.882 07:15:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:55.882 07:15:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:55.882 07:15:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:55.882 07:15:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:55.882 { 00:09:55.882 "params": { 00:09:55.882 "name": "Nvme$subsystem", 00:09:55.882 "trtype": "$TEST_TRANSPORT", 00:09:55.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:55.882 "adrfam": "ipv4", 00:09:55.882 "trsvcid": "$NVMF_PORT", 00:09:55.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:55.882 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:55.882 "hdgst": ${hdgst:-false}, 00:09:55.882 "ddgst": ${ddgst:-false} 00:09:55.882 }, 00:09:55.882 "method": "bdev_nvme_attach_controller" 00:09:55.882 } 00:09:55.882 EOF 00:09:55.882 )") 00:09:55.882 07:15:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:55.882 07:15:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:55.882 07:15:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:55.882 07:15:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:55.882 "params": { 00:09:55.882 "name": "Nvme0", 00:09:55.882 "trtype": "rdma", 00:09:55.882 "traddr": "192.168.100.8", 00:09:55.882 "adrfam": "ipv4", 00:09:55.883 "trsvcid": "4420", 00:09:55.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:55.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:55.883 "hdgst": false, 00:09:55.883 "ddgst": false 00:09:55.883 }, 00:09:55.883 "method": "bdev_nvme_attach_controller" 00:09:55.883 }' 00:09:55.883 [2024-07-25 07:15:28.335584] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:09:55.883 [2024-07-25 07:15:28.335641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2553311 ] 00:09:55.883 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.142 [2024-07-25 07:15:28.419429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.142 [2024-07-25 07:15:28.489357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.142 Running I/O for 1 seconds... 
00:09:57.522 00:09:57.522 Latency(us) 00:09:57.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.522 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:57.522 Verification LBA range: start 0x0 length 0x400 00:09:57.522 Nvme0n1 : 1.00 3160.54 197.53 0.00 0.00 19835.77 622.59 32296.14 00:09:57.522 =================================================================================================================== 00:09:57.522 Total : 3160.54 197.53 0.00 0.00 19835.77 622.59 32296.14 00:09:57.522 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2552826 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:57.522 rmmod nvme_rdma 00:09:57.522 rmmod nvme_fabrics 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2552704 ']' 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2552704 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2552704 ']' 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2552704 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2552704 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2552704' 00:09:57.522 killing process with pid 2552704 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2552704 00:09:57.522 07:15:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2552704 00:09:57.782 [2024-07-25 07:15:30.253929] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:57.782 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.782 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:57.782 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:57.782 00:09:57.782 real 0m13.525s 00:09:57.782 user 0m25.290s 00:09:57.782 sys 0m7.352s 00:09:57.782 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.782 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.782 ************************************ 00:09:57.782 END TEST nvmf_host_management 00:09:57.782 ************************************ 00:09:58.041 07:15:30 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:09:58.041 07:15:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:58.041 07:15:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:58.041 07:15:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.041 ************************************ 00:09:58.041 START TEST nvmf_lvol 00:09:58.041 ************************************ 00:09:58.041 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:09:58.041 * Looking for test storage... 
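One teardown detail worth pulling out of the trace above before the lvol test continues (its matching 'Found test storage' line follows right after this aside): killprocess never signals a pid blindly. A hedged reconstruction of its guard rails:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                        # must be handed a pid
    kill -0 "$pid" || return 1                       # is it still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # reactor_1 in this run
    [ "$process_name" != sudo ] || return 1          # never signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                              # reap it so the exit is recorded
}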
00:09:58.041 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:58.041 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.041 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:58.041 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.041 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.041 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.041 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:09:58.042 07:15:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
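The ladder of mlx+=(...) appends, which resumes immediately below this aside, is just a lookup table of Mellanox PCI device IDs to match against the bus (0x1015, the ConnectX-4 Lx ID, is the one both ports on this node report). Once the table is merged into pci_devs, each function is resolved to its kernel net interface by globbing sysfs, roughly as follows (the address is taken from this log):

pci=0000:d9:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
# With no match the glob survives literally, so test that the first entry exists
[[ -e ${pci_net_devs[0]} ]] &&
    echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"

This is the step that prints the 'Found net devices under 0000:d9:00.0: mlx_0_0' line further on.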
00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:06.235 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:06.235 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:06.235 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:06.235 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:06.235 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:06.236 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:06.236 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:06.236 altname 
enp217s0f0np0 00:10:06.236 altname ens818f0np0 00:10:06.236 inet 192.168.100.8/24 scope global mlx_0_0 00:10:06.236 valid_lft forever preferred_lft forever 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:06.236 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:06.236 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:06.236 altname enp217s0f1np1 00:10:06.236 altname ens818f1np1 00:10:06.236 inet 192.168.100.9/24 scope global mlx_0_1 00:10:06.236 valid_lft forever preferred_lft forever 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:06.236 192.168.100.9' 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:06.236 192.168.100.9' 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:06.236 192.168.100.9' 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:10:06.236 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:06.237 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:06.237 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:06.237 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:06.237 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:06.237 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:06.496 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:06.496 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.496 07:15:38 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.496 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:06.496 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2557563 00:10:06.496 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:06.496 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2557563 00:10:06.496 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2557563 ']' 00:10:06.496 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.496 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.496 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.496 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.496 07:15:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:06.496 [2024-07-25 07:15:38.820012] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:10:06.496 [2024-07-25 07:15:38.820065] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.496 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.496 [2024-07-25 07:15:38.902232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:06.496 [2024-07-25 07:15:38.977104] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.496 [2024-07-25 07:15:38.977140] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.496 [2024-07-25 07:15:38.977149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.496 [2024-07-25 07:15:38.977158] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.496 [2024-07-25 07:15:38.977165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
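[editor's note] At this point nvmfappstart has launched build/bin/nvmf_tgt with -m 0x7 (cores 0-2) and waitforlisten is blocking until the app answers on /var/tmp/spdk.sock. A rough equivalent of that wait, as a sketch only; the real waitforlisten in autotest_common.sh is more robust (retry limits, error reporting):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the target is ready to serve RPCs.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
        sleep 0.5
    done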
00:10:06.496 [2024-07-25 07:15:38.977211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.496 [2024-07-25 07:15:38.977305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.496 [2024-07-25 07:15:38.977307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.433 07:15:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.433 07:15:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:10:07.433 07:15:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:07.433 07:15:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.433 07:15:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:07.433 07:15:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.433 07:15:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:07.433 [2024-07-25 07:15:39.859078] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfda200/0xfde6f0) succeed. 00:10:07.433 [2024-07-25 07:15:39.867953] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfdb7a0/0x101fd80) succeed. 00:10:07.691 07:15:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.691 07:15:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:07.691 07:15:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.950 07:15:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:07.950 07:15:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:08.208 07:15:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:08.208 07:15:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=91d3a944-8a99-486e-a948-515a65d076b5 00:10:08.209 07:15:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 91d3a944-8a99-486e-a948-515a65d076b5 lvol 20 00:10:08.467 07:15:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5b75982d-f813-4ee4-9316-f02a148be445 00:10:08.467 07:15:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:08.726 07:15:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5b75982d-f813-4ee4-9316-f02a148be445 00:10:08.985 07:15:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:08.985 [2024-07-25 07:15:41.436077] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:08.985 07:15:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:09.244 07:15:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2558093 00:10:09.244 07:15:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:09.244 07:15:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:09.244 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.182 07:15:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5b75982d-f813-4ee4-9316-f02a148be445 MY_SNAPSHOT 00:10:10.443 07:15:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=76ef7060-05ad-4794-b344-dfe4aef901ec 00:10:10.443 07:15:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5b75982d-f813-4ee4-9316-f02a148be445 30 00:10:10.702 07:15:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 76ef7060-05ad-4794-b344-dfe4aef901ec MY_CLONE 00:10:10.702 07:15:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=64b72fb3-983c-403a-8fb7-adf1d06213de 00:10:10.702 07:15:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 64b72fb3-983c-403a-8fb7-adf1d06213de 00:10:10.961 07:15:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2558093 00:10:20.944 Initializing NVMe Controllers 00:10:20.944 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:10:20.944 Controller IO queue size 128, less than required. 00:10:20.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:20.944 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:20.944 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:20.944 Initialization complete. Launching workers. 
00:10:20.944 ========================================================
00:10:20.944                                                                                  Latency(us)
00:10:20.944 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:10:20.944 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   17043.70      66.58    7511.98    2000.36   46345.80
00:10:20.944 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   16998.10      66.40    7532.28    3309.50   49372.86
00:10:20.944 ========================================================
00:10:20.944 Total                                                                          :   34041.79     132.98    7522.12    2000.36   49372.86
00:10:20.944
00:10:20.944 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:10:20.944 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5b75982d-f813-4ee4-9316-f02a148be445
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 91d3a944-8a99-486e-a948-515a65d076b5
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:10:21.203 rmmod nvme_rdma
00:10:21.203 rmmod nvme_fabrics
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2557563 ']'
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2557563
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2557563 ']'
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2557563
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2557563
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2557563' 00:10:21.203 killing process with pid 2557563 00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2557563 00:10:21.203 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2557563 00:10:21.463 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:21.463 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:21.463 00:10:21.463 real 0m23.627s 00:10:21.463 user 1m11.792s 00:10:21.463 sys 0m7.567s 00:10:21.463 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.463 07:15:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:21.463 ************************************ 00:10:21.463 END TEST nvmf_lvol 00:10:21.463 ************************************ 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.723 ************************************ 00:10:21.723 START TEST nvmf_lvs_grow 00:10:21.723 ************************************ 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:21.723 * Looking for test storage... 
00:10:21.723 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.723 07:15:54 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 
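[editor's note] With $rpc_py aliased to scripts/rpc.py just above, the provisioning flow that the nvmf_lvol run earlier in this log exercised can be summarized as the sketch below. Every command appears verbatim in the trace; the shell variables capturing the returned bdev names and UUIDs are illustrative only:

    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    base0=$($rpc_py bdev_malloc_create 64 512)           # returned Malloc0 above
    base1=$($rpc_py bdev_malloc_create 64 512)           # returned Malloc1 above
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"
    lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)    # returns the lvstore UUID
    lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)   # 20 is the size argument used in the trace
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420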
00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:10:21.723 07:15:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.849 07:16:02 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.849 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:29.850 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:29.850 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:29.850 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:29.850 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:29.850 07:16:02 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:29.850 
07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:29.850 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:29.850 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:29.850 altname enp217s0f0np0 00:10:29.850 altname ens818f0np0 00:10:29.850 inet 192.168.100.8/24 scope global mlx_0_0 00:10:29.850 valid_lft forever preferred_lft forever 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:29.850 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:29.850 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:29.850 altname enp217s0f1np1 00:10:29.850 altname ens818f1np1 00:10:29.850 inet 192.168.100.9/24 scope global mlx_0_1 00:10:29.850 valid_lft forever preferred_lft forever 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:29.850 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:29.851 07:16:02 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:29.851 192.168.100.9' 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:29.851 192.168.100.9' 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:10:29.851 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:30.111 192.168.100.9' 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@463 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2564400 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2564400 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2564400 ']' 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.111 07:16:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:30.111 [2024-07-25 07:16:02.468526] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:10:30.111 [2024-07-25 07:16:02.468574] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.111 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.111 [2024-07-25 07:16:02.552680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.111 [2024-07-25 07:16:02.625854] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.111 [2024-07-25 07:16:02.625892] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.111 [2024-07-25 07:16:02.625901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.111 [2024-07-25 07:16:02.625909] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.111 [2024-07-25 07:16:02.625916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
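The trace above is nvmfappstart bringing up the target that both lvs_grow tests run against: the RDMA interfaces mlx_0_0/mlx_0_1 were confirmed to hold 192.168.100.8 and 192.168.100.9, NVMF_TRANSPORT_OPTS was pinned to '-t rdma --num-shared-buffers 1024', nvme-rdma was loaded, and nvmf_tgt was started on core 0. A rough manual equivalent, as a sketch only (paths follow this workspace; -i selects the shm/trace instance id referenced by the 'spdk_trace -s nvmf -i 0' hint above):

    modprobe nvme-rdma
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # once /var/tmp/spdk.sock is up, create the RDMA transport; the same call
    # is issued at 07:16:03 below (-u 8192 is the transport I/O unit size)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192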
00:10:30.111 [2024-07-25 07:16:02.625936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.054 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.054 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:31.054 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:31.054 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:31.054 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:31.054 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.054 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:31.054 [2024-07-25 07:16:03.477820] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11acb90/0x11b1080) succeed. 00:10:31.054 [2024-07-25 07:16:03.486656] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11ae090/0x11f2710) succeed. 00:10:31.054 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:31.054 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:31.054 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.054 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:31.317 ************************************ 00:10:31.317 START TEST lvs_grow_clean 00:10:31.317 ************************************ 00:10:31.317 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:31.317 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:31.317 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:31.317 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:31.317 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:31.317 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:31.317 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:31.317 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:31.317 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:31.317 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:31.317 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:31.317 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:31.617 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 00:10:31.617 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 00:10:31.617 07:16:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:31.905 07:16:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:31.905 07:16:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:31.905 07:16:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 lvol 150 00:10:31.905 07:16:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ab60f305-8ada-4dd2-95fe-eea4bb8f99f0 00:10:31.905 07:16:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:31.905 07:16:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:32.164 [2024-07-25 07:16:04.464817] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:32.164 [2024-07-25 07:16:04.464863] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:32.164 true 00:10:32.164 07:16:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 00:10:32.164 07:16:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:32.164 07:16:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:32.164 07:16:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:32.423 07:16:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ab60f305-8ada-4dd2-95fe-eea4bb8f99f0 00:10:32.683 07:16:04 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:32.683 [2024-07-25 07:16:05.118948] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:32.683 07:16:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:32.942 07:16:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:32.942 07:16:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2564902 00:10:32.942 07:16:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:32.942 07:16:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2564902 /var/tmp/bdevperf.sock 00:10:32.942 07:16:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2564902 ']' 00:10:32.943 07:16:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:32.943 07:16:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.943 07:16:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:32.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:32.943 07:16:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.943 07:16:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:32.943 [2024-07-25 07:16:05.323332] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
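While bdevperf finishes initializing (its EAL banner continues below), it is worth condensing what the lvs_grow_clean fixture above assembled. As a sketch, with $lvs and $lvol standing for the UUIDs 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 and ab60f305-8ada-4dd2-95fe-eea4bb8f99f0 printed in the trace, $testdir for the test/nvmf/target directory used above, and rpc.py abbreviating the full scripts/rpc.py path:

    truncate -s 200M $testdir/aio_bdev                 # backing file for the aio bdev
    rpc.py bdev_aio_create $testdir/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs  # 200M at 4M per cluster => 49 data clusters
    rpc.py bdev_lvol_create -u $lvs lvol 150           # 150M rounds up to 38 clusters = 38912 4K blocks
    truncate -s 400M $testdir/aio_bdev                 # grow the file under the bdev
    rpc.py bdev_aio_rescan aio_bdev                    # block count 51200 -> 102400
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

Note that the rescan only resizes the base bdev: total_data_clusters stays at 49 (one of the 50 raw clusters presumably holding lvstore metadata) until bdev_lvol_grow_lvstore runs at 07:16:08 below, after which the test expects 99.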
00:10:32.943 [2024-07-25 07:16:05.323382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564902 ] 00:10:32.943 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.943 [2024-07-25 07:16:05.402099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.202 [2024-07-25 07:16:05.475910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.771 07:16:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.771 07:16:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:33.771 07:16:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:34.030 Nvme0n1 00:10:34.030 07:16:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:34.030 [ 00:10:34.030 { 00:10:34.030 "name": "Nvme0n1", 00:10:34.030 "aliases": [ 00:10:34.030 "ab60f305-8ada-4dd2-95fe-eea4bb8f99f0" 00:10:34.030 ], 00:10:34.030 "product_name": "NVMe disk", 00:10:34.030 "block_size": 4096, 00:10:34.030 "num_blocks": 38912, 00:10:34.030 "uuid": "ab60f305-8ada-4dd2-95fe-eea4bb8f99f0", 00:10:34.030 "assigned_rate_limits": { 00:10:34.030 "rw_ios_per_sec": 0, 00:10:34.030 "rw_mbytes_per_sec": 0, 00:10:34.030 "r_mbytes_per_sec": 0, 00:10:34.030 "w_mbytes_per_sec": 0 00:10:34.030 }, 00:10:34.030 "claimed": false, 00:10:34.030 "zoned": false, 00:10:34.030 "supported_io_types": { 00:10:34.030 "read": true, 00:10:34.030 "write": true, 00:10:34.030 "unmap": true, 00:10:34.030 "flush": true, 00:10:34.030 "reset": true, 00:10:34.030 "nvme_admin": true, 00:10:34.030 "nvme_io": true, 00:10:34.030 "nvme_io_md": false, 00:10:34.030 "write_zeroes": true, 00:10:34.030 "zcopy": false, 00:10:34.030 "get_zone_info": false, 00:10:34.030 "zone_management": false, 00:10:34.030 "zone_append": false, 00:10:34.030 "compare": true, 00:10:34.030 "compare_and_write": true, 00:10:34.030 "abort": true, 00:10:34.030 "seek_hole": false, 00:10:34.030 "seek_data": false, 00:10:34.030 "copy": true, 00:10:34.030 "nvme_iov_md": false 00:10:34.030 }, 00:10:34.030 "memory_domains": [ 00:10:34.030 { 00:10:34.030 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:34.030 "dma_device_type": 0 00:10:34.030 } 00:10:34.030 ], 00:10:34.030 "driver_specific": { 00:10:34.030 "nvme": [ 00:10:34.030 { 00:10:34.030 "trid": { 00:10:34.030 "trtype": "RDMA", 00:10:34.030 "adrfam": "IPv4", 00:10:34.030 "traddr": "192.168.100.8", 00:10:34.030 "trsvcid": "4420", 00:10:34.030 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:34.030 }, 00:10:34.030 "ctrlr_data": { 00:10:34.030 "cntlid": 1, 00:10:34.030 "vendor_id": "0x8086", 00:10:34.030 "model_number": "SPDK bdev Controller", 00:10:34.030 "serial_number": "SPDK0", 00:10:34.030 "firmware_revision": "24.09", 00:10:34.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:34.030 "oacs": { 00:10:34.030 "security": 0, 00:10:34.030 "format": 0, 00:10:34.030 "firmware": 0, 00:10:34.030 "ns_manage": 0 00:10:34.030 }, 
00:10:34.030 "multi_ctrlr": true, 00:10:34.030 "ana_reporting": false 00:10:34.030 }, 00:10:34.030 "vs": { 00:10:34.030 "nvme_version": "1.3" 00:10:34.030 }, 00:10:34.030 "ns_data": { 00:10:34.030 "id": 1, 00:10:34.030 "can_share": true 00:10:34.030 } 00:10:34.030 } 00:10:34.030 ], 00:10:34.030 "mp_policy": "active_passive" 00:10:34.030 } 00:10:34.031 } 00:10:34.031 ] 00:10:34.031 07:16:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:34.031 07:16:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2565032 00:10:34.031 07:16:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:34.290 Running I/O for 10 seconds... 00:10:35.227 Latency(us) 00:10:35.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.227 Nvme0n1 : 1.00 34944.00 136.50 0.00 0.00 0.00 0.00 0.00 00:10:35.227 =================================================================================================================== 00:10:35.227 Total : 34944.00 136.50 0.00 0.00 0.00 0.00 0.00 00:10:35.227 00:10:36.164 07:16:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 00:10:36.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.164 Nvme0n1 : 2.00 35376.00 138.19 0.00 0.00 0.00 0.00 0.00 00:10:36.164 =================================================================================================================== 00:10:36.164 Total : 35376.00 138.19 0.00 0.00 0.00 0.00 0.00 00:10:36.164 00:10:36.423 true 00:10:36.423 07:16:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 00:10:36.423 07:16:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:36.423 07:16:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:36.423 07:16:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:36.423 07:16:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2565032 00:10:37.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.361 Nvme0n1 : 3.00 35531.33 138.79 0.00 0.00 0.00 0.00 0.00 00:10:37.361 =================================================================================================================== 00:10:37.361 Total : 35531.33 138.79 0.00 0.00 0.00 0.00 0.00 00:10:37.361 00:10:38.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.298 Nvme0n1 : 4.00 35639.75 139.22 0.00 0.00 0.00 0.00 0.00 00:10:38.298 =================================================================================================================== 00:10:38.298 Total : 35639.75 139.22 0.00 0.00 0.00 0.00 0.00 00:10:38.298 00:10:39.235 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:10:39.235 Nvme0n1 : 5.00 35732.20 139.58 0.00 0.00 0.00 0.00 0.00 00:10:39.235 =================================================================================================================== 00:10:39.235 Total : 35732.20 139.58 0.00 0.00 0.00 0.00 0.00 00:10:39.235 00:10:40.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.172 Nvme0n1 : 6.00 35803.67 139.86 0.00 0.00 0.00 0.00 0.00 00:10:40.172 =================================================================================================================== 00:10:40.172 Total : 35803.67 139.86 0.00 0.00 0.00 0.00 0.00 00:10:40.172 00:10:41.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.549 Nvme0n1 : 7.00 35848.00 140.03 0.00 0.00 0.00 0.00 0.00 00:10:41.549 =================================================================================================================== 00:10:41.549 Total : 35848.00 140.03 0.00 0.00 0.00 0.00 0.00 00:10:41.549 00:10:42.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.486 Nvme0n1 : 8.00 35880.62 140.16 0.00 0.00 0.00 0.00 0.00 00:10:42.486 =================================================================================================================== 00:10:42.486 Total : 35880.62 140.16 0.00 0.00 0.00 0.00 0.00 00:10:42.486 00:10:43.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.421 Nvme0n1 : 9.00 35908.11 140.27 0.00 0.00 0.00 0.00 0.00 00:10:43.421 =================================================================================================================== 00:10:43.421 Total : 35908.11 140.27 0.00 0.00 0.00 0.00 0.00 00:10:43.421 00:10:44.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.387 Nvme0n1 : 10.00 35929.00 140.35 0.00 0.00 0.00 0.00 0.00 00:10:44.387 =================================================================================================================== 00:10:44.387 Total : 35929.00 140.35 0.00 0.00 0.00 0.00 0.00 00:10:44.387 00:10:44.387 00:10:44.387 Latency(us) 00:10:44.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.387 Nvme0n1 : 10.00 35928.93 140.35 0.00 0.00 3559.62 2503.48 17511.22 00:10:44.388 =================================================================================================================== 00:10:44.388 Total : 35928.93 140.35 0.00 0.00 3559.62 2503.48 17511.22 00:10:44.388 0 00:10:44.388 07:16:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2564902 00:10:44.388 07:16:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2564902 ']' 00:10:44.388 07:16:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2564902 00:10:44.388 07:16:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:10:44.388 07:16:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:44.388 07:16:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2564902 00:10:44.388 07:16:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 
-- # process_name=reactor_1 00:10:44.388 07:16:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:44.388 07:16:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2564902' 00:10:44.388 killing process with pid 2564902 00:10:44.388 07:16:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2564902 00:10:44.388 Received shutdown signal, test time was about 10.000000 seconds 00:10:44.388 00:10:44.388 Latency(us) 00:10:44.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.388 =================================================================================================================== 00:10:44.388 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:44.388 07:16:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2564902 00:10:44.388 07:16:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:44.647 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:44.906 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 00:10:44.906 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:45.164 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:45.164 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:45.164 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:45.164 [2024-07-25 07:16:17.586434] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:45.164 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 00:10:45.164 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:45.165 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 00:10:45.165 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:45.165 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.165 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:45.165 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.165 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:45.165 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.165 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:45.165 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:45.165 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 00:10:45.424 request: 00:10:45.424 { 00:10:45.424 "uuid": "828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4", 00:10:45.424 "method": "bdev_lvol_get_lvstores", 00:10:45.424 "req_id": 1 00:10:45.424 } 00:10:45.424 Got JSON-RPC error response 00:10:45.424 response: 00:10:45.424 { 00:10:45.424 "code": -19, 00:10:45.424 "message": "No such device" 00:10:45.424 } 00:10:45.424 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:45.424 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:45.424 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:45.424 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:45.424 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:45.424 aio_bdev 00:10:45.685 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ab60f305-8ada-4dd2-95fe-eea4bb8f99f0 00:10:45.685 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=ab60f305-8ada-4dd2-95fe-eea4bb8f99f0 00:10:45.685 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.685 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:45.685 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.685 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:45.685 07:16:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:45.685 07:16:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 
ab60f305-8ada-4dd2-95fe-eea4bb8f99f0 -t 2000 00:10:45.993 [ 00:10:45.993 { 00:10:45.993 "name": "ab60f305-8ada-4dd2-95fe-eea4bb8f99f0", 00:10:45.993 "aliases": [ 00:10:45.993 "lvs/lvol" 00:10:45.993 ], 00:10:45.993 "product_name": "Logical Volume", 00:10:45.993 "block_size": 4096, 00:10:45.993 "num_blocks": 38912, 00:10:45.993 "uuid": "ab60f305-8ada-4dd2-95fe-eea4bb8f99f0", 00:10:45.993 "assigned_rate_limits": { 00:10:45.993 "rw_ios_per_sec": 0, 00:10:45.993 "rw_mbytes_per_sec": 0, 00:10:45.993 "r_mbytes_per_sec": 0, 00:10:45.993 "w_mbytes_per_sec": 0 00:10:45.993 }, 00:10:45.993 "claimed": false, 00:10:45.993 "zoned": false, 00:10:45.993 "supported_io_types": { 00:10:45.993 "read": true, 00:10:45.993 "write": true, 00:10:45.993 "unmap": true, 00:10:45.993 "flush": false, 00:10:45.993 "reset": true, 00:10:45.993 "nvme_admin": false, 00:10:45.993 "nvme_io": false, 00:10:45.993 "nvme_io_md": false, 00:10:45.993 "write_zeroes": true, 00:10:45.993 "zcopy": false, 00:10:45.993 "get_zone_info": false, 00:10:45.993 "zone_management": false, 00:10:45.993 "zone_append": false, 00:10:45.993 "compare": false, 00:10:45.993 "compare_and_write": false, 00:10:45.993 "abort": false, 00:10:45.993 "seek_hole": true, 00:10:45.993 "seek_data": true, 00:10:45.993 "copy": false, 00:10:45.993 "nvme_iov_md": false 00:10:45.993 }, 00:10:45.993 "driver_specific": { 00:10:45.993 "lvol": { 00:10:45.993 "lvol_store_uuid": "828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4", 00:10:45.993 "base_bdev": "aio_bdev", 00:10:45.993 "thin_provision": false, 00:10:45.993 "num_allocated_clusters": 38, 00:10:45.993 "snapshot": false, 00:10:45.993 "clone": false, 00:10:45.993 "esnap_clone": false 00:10:45.993 } 00:10:45.993 } 00:10:45.993 } 00:10:45.993 ] 00:10:45.993 07:16:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:45.993 07:16:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 00:10:45.993 07:16:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:45.993 07:16:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:45.993 07:16:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 00:10:45.993 07:16:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:46.252 07:16:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:46.253 07:16:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ab60f305-8ada-4dd2-95fe-eea4bb8f99f0 00:10:46.253 07:16:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 828aa4d8-e6ea-41e6-8d09-5c75b2d09cd4 00:10:46.511 07:16:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:46.771 00:10:46.771 real 0m15.545s 00:10:46.771 user 0m15.415s 00:10:46.771 sys 0m1.192s 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:46.771 ************************************ 00:10:46.771 END TEST lvs_grow_clean 00:10:46.771 ************************************ 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:46.771 ************************************ 00:10:46.771 START TEST lvs_grow_dirty 00:10:46.771 ************************************ 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:46.771 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:47.030 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:47.030 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:47.289 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:10:47.289 07:16:19 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:10:47.289 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:47.289 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:47.289 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:47.289 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 lvol 150 00:10:47.548 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=427ab730-aa8b-4151-9ec5-9dc96b85219e 00:10:47.548 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:47.548 07:16:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:47.807 [2024-07-25 07:16:20.083310] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:47.807 [2024-07-25 07:16:20.083365] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:47.807 true 00:10:47.807 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:10:47.807 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:47.807 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:47.807 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:48.066 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 427ab730-aa8b-4151-9ec5-9dc96b85219e 00:10:48.325 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:48.325 [2024-07-25 07:16:20.745476] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:48.325 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:48.584 07:16:20 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:48.584 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2567667 00:10:48.584 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:48.584 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2567667 /var/tmp/bdevperf.sock 00:10:48.584 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2567667 ']' 00:10:48.584 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:48.584 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.584 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:48.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:48.584 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.584 07:16:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:48.584 [2024-07-25 07:16:20.950750] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
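As its EAL banner below shows, this is the dirty-variant rerun of the same 10-second workload against the rebuilt fixture (lvstore 3ae7e036-2f0a-43e1-a9e5-d908491637e3, lvol 427ab730-aa8b-4151-9ec5-9dc96b85219e). The initiator side is identical in both variants; condensed as a sketch, using the socket, flags, and NQN from the trace:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # attach the exported lvol as bdev Nvme0n1 over NVMe/RDMA
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # -z made bdevperf wait for this RPC; -S 1 is why a result line prints every second
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

What makes this run "dirty" comes after the grow is verified: the test kill -9's the original nvmf_tgt (pid 2564400) so the lvstore is never cleanly unloaded, then expects a fresh target to recover it (07:16:33 onward below).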
00:10:48.584 [2024-07-25 07:16:20.950810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567667 ] 00:10:48.584 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.584 [2024-07-25 07:16:21.034108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.584 [2024-07-25 07:16:21.105722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.522 07:16:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.522 07:16:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:49.522 07:16:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:49.522 Nvme0n1 00:10:49.522 07:16:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:49.781 [ 00:10:49.781 { 00:10:49.781 "name": "Nvme0n1", 00:10:49.781 "aliases": [ 00:10:49.781 "427ab730-aa8b-4151-9ec5-9dc96b85219e" 00:10:49.781 ], 00:10:49.781 "product_name": "NVMe disk", 00:10:49.781 "block_size": 4096, 00:10:49.781 "num_blocks": 38912, 00:10:49.781 "uuid": "427ab730-aa8b-4151-9ec5-9dc96b85219e", 00:10:49.781 "assigned_rate_limits": { 00:10:49.781 "rw_ios_per_sec": 0, 00:10:49.781 "rw_mbytes_per_sec": 0, 00:10:49.781 "r_mbytes_per_sec": 0, 00:10:49.781 "w_mbytes_per_sec": 0 00:10:49.781 }, 00:10:49.781 "claimed": false, 00:10:49.781 "zoned": false, 00:10:49.781 "supported_io_types": { 00:10:49.781 "read": true, 00:10:49.781 "write": true, 00:10:49.781 "unmap": true, 00:10:49.781 "flush": true, 00:10:49.782 "reset": true, 00:10:49.782 "nvme_admin": true, 00:10:49.782 "nvme_io": true, 00:10:49.782 "nvme_io_md": false, 00:10:49.782 "write_zeroes": true, 00:10:49.782 "zcopy": false, 00:10:49.782 "get_zone_info": false, 00:10:49.782 "zone_management": false, 00:10:49.782 "zone_append": false, 00:10:49.782 "compare": true, 00:10:49.782 "compare_and_write": true, 00:10:49.782 "abort": true, 00:10:49.782 "seek_hole": false, 00:10:49.782 "seek_data": false, 00:10:49.782 "copy": true, 00:10:49.782 "nvme_iov_md": false 00:10:49.782 }, 00:10:49.782 "memory_domains": [ 00:10:49.782 { 00:10:49.782 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:49.782 "dma_device_type": 0 00:10:49.782 } 00:10:49.782 ], 00:10:49.782 "driver_specific": { 00:10:49.782 "nvme": [ 00:10:49.782 { 00:10:49.782 "trid": { 00:10:49.782 "trtype": "RDMA", 00:10:49.782 "adrfam": "IPv4", 00:10:49.782 "traddr": "192.168.100.8", 00:10:49.782 "trsvcid": "4420", 00:10:49.782 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:49.782 }, 00:10:49.782 "ctrlr_data": { 00:10:49.782 "cntlid": 1, 00:10:49.782 "vendor_id": "0x8086", 00:10:49.782 "model_number": "SPDK bdev Controller", 00:10:49.782 "serial_number": "SPDK0", 00:10:49.782 "firmware_revision": "24.09", 00:10:49.782 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:49.782 "oacs": { 00:10:49.782 "security": 0, 00:10:49.782 "format": 0, 00:10:49.782 "firmware": 0, 00:10:49.782 "ns_manage": 0 00:10:49.782 }, 
00:10:49.782 "multi_ctrlr": true, 00:10:49.782 "ana_reporting": false 00:10:49.782 }, 00:10:49.782 "vs": { 00:10:49.782 "nvme_version": "1.3" 00:10:49.782 }, 00:10:49.782 "ns_data": { 00:10:49.782 "id": 1, 00:10:49.782 "can_share": true 00:10:49.782 } 00:10:49.782 } 00:10:49.782 ], 00:10:49.782 "mp_policy": "active_passive" 00:10:49.782 } 00:10:49.782 } 00:10:49.782 ] 00:10:49.782 07:16:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2567799 00:10:49.782 07:16:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:49.782 07:16:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:49.782 Running I/O for 10 seconds... 00:10:51.160 Latency(us) 00:10:51.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.160 Nvme0n1 : 1.00 35072.00 137.00 0.00 0.00 0.00 0.00 0.00 00:10:51.160 =================================================================================================================== 00:10:51.160 Total : 35072.00 137.00 0.00 0.00 0.00 0.00 0.00 00:10:51.160 00:10:51.727 07:16:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:10:51.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.986 Nvme0n1 : 2.00 35392.50 138.25 0.00 0.00 0.00 0.00 0.00 00:10:51.986 =================================================================================================================== 00:10:51.986 Total : 35392.50 138.25 0.00 0.00 0.00 0.00 0.00 00:10:51.986 00:10:51.986 true 00:10:51.986 07:16:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:10:51.986 07:16:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:52.245 07:16:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:52.245 07:16:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:52.245 07:16:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2567799 00:10:52.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.813 Nvme0n1 : 3.00 35499.00 138.67 0.00 0.00 0.00 0.00 0.00 00:10:52.813 =================================================================================================================== 00:10:52.814 Total : 35499.00 138.67 0.00 0.00 0.00 0.00 0.00 00:10:52.814 00:10:54.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.192 Nvme0n1 : 4.00 35616.00 139.12 0.00 0.00 0.00 0.00 0.00 00:10:54.192 =================================================================================================================== 00:10:54.192 Total : 35616.00 139.12 0.00 0.00 0.00 0.00 0.00 00:10:54.192 00:10:55.130 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:10:55.130 Nvme0n1 : 5.00 35686.20 139.40 0.00 0.00 0.00 0.00 0.00 00:10:55.130 =================================================================================================================== 00:10:55.130 Total : 35686.20 139.40 0.00 0.00 0.00 0.00 0.00 00:10:55.130 00:10:56.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:56.067 Nvme0n1 : 6.00 35760.00 139.69 0.00 0.00 0.00 0.00 0.00 00:10:56.067 =================================================================================================================== 00:10:56.067 Total : 35760.00 139.69 0.00 0.00 0.00 0.00 0.00 00:10:56.067 00:10:57.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.004 Nvme0n1 : 7.00 35775.71 139.75 0.00 0.00 0.00 0.00 0.00 00:10:57.004 =================================================================================================================== 00:10:57.004 Total : 35775.71 139.75 0.00 0.00 0.00 0.00 0.00 00:10:57.004 00:10:57.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.940 Nvme0n1 : 8.00 35760.38 139.69 0.00 0.00 0.00 0.00 0.00 00:10:57.940 =================================================================================================================== 00:10:57.940 Total : 35760.38 139.69 0.00 0.00 0.00 0.00 0.00 00:10:57.940 00:10:58.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.873 Nvme0n1 : 9.00 35793.56 139.82 0.00 0.00 0.00 0.00 0.00 00:10:58.873 =================================================================================================================== 00:10:58.873 Total : 35793.56 139.82 0.00 0.00 0.00 0.00 0.00 00:10:58.873 00:10:59.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.806 Nvme0n1 : 10.00 35810.90 139.89 0.00 0.00 0.00 0.00 0.00 00:10:59.806 =================================================================================================================== 00:10:59.806 Total : 35810.90 139.89 0.00 0.00 0.00 0.00 0.00 00:10:59.806 00:10:59.806 00:10:59.806 Latency(us) 00:10:59.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.806 Nvme0n1 : 10.00 35810.02 139.88 0.00 0.00 3571.63 2346.19 12006.20 00:10:59.806 =================================================================================================================== 00:10:59.806 Total : 35810.02 139.88 0.00 0.00 3571.63 2346.19 12006.20 00:10:59.806 0 00:10:59.806 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2567667 00:10:59.806 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2567667 ']' 00:10:59.806 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2567667 00:10:59.806 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:11:00.065 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.065 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2567667 00:11:00.065 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 
-- # process_name=reactor_1 00:11:00.065 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:00.065 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2567667' 00:11:00.065 killing process with pid 2567667 00:11:00.065 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2567667 00:11:00.065 Received shutdown signal, test time was about 10.000000 seconds 00:11:00.065 00:11:00.065 Latency(us) 00:11:00.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.065 =================================================================================================================== 00:11:00.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:00.065 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2567667 00:11:00.065 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:00.323 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:00.612 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:11:00.612 07:16:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:00.873 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2564400 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2564400 00:11:00.874 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2564400 Killed "${NVMF_APP[@]}" "$@" 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2569748 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- nvmf/common.sh@482 -- # waitforlisten 2569748 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2569748 ']' 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.874 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:00.874 [2024-07-25 07:16:33.215670] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:11:00.874 [2024-07-25 07:16:33.215725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.874 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.874 [2024-07-25 07:16:33.301605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.874 [2024-07-25 07:16:33.373344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.874 [2024-07-25 07:16:33.373385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.874 [2024-07-25 07:16:33.373394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.874 [2024-07-25 07:16:33.373402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.874 [2024-07-25 07:16:33.373409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
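What the trace below is checking: the previous nvmf_tgt instance was killed with SIGKILL while the logical-volume store still had dirty metadata, so when the freshly started target re-creates the backing AIO bdev, the blobstore must run recovery and the recovered lvstore must still report the free-cluster count observed before the kill. A minimal restatement of that check, using only RPCs and values visible in this run (a sketch of what target/nvmf_lvs_grow.sh does here, not the script verbatim):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Re-create the AIO bdev over the old backing file; loading the dirty blobstore
    # triggers the "Performing recovery on blobstore" notices seen below.
    $rpc bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    # The recovered lvstore must still show 61 free clusters, as it did pre-kill.
    free_clusters=$($rpc bdev_lvol_get_lvstores -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 |
        jq -r '.[0].free_clusters')
    (( free_clusters == 61 ))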
00:11:00.874 [2024-07-25 07:16:33.373428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.805 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.805 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:01.805 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.805 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.805 07:16:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:01.805 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.805 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:01.805 [2024-07-25 07:16:34.194105] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:01.805 [2024-07-25 07:16:34.194185] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:01.805 [2024-07-25 07:16:34.194210] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:01.805 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:01.805 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 427ab730-aa8b-4151-9ec5-9dc96b85219e 00:11:01.805 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=427ab730-aa8b-4151-9ec5-9dc96b85219e 00:11:01.805 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:01.806 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:01.806 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:01.806 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:01.806 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:02.063 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 427ab730-aa8b-4151-9ec5-9dc96b85219e -t 2000 00:11:02.063 [ 00:11:02.063 { 00:11:02.063 "name": "427ab730-aa8b-4151-9ec5-9dc96b85219e", 00:11:02.063 "aliases": [ 00:11:02.063 "lvs/lvol" 00:11:02.063 ], 00:11:02.063 "product_name": "Logical Volume", 00:11:02.063 "block_size": 4096, 00:11:02.063 "num_blocks": 38912, 00:11:02.063 "uuid": "427ab730-aa8b-4151-9ec5-9dc96b85219e", 00:11:02.063 "assigned_rate_limits": { 00:11:02.063 "rw_ios_per_sec": 0, 00:11:02.063 "rw_mbytes_per_sec": 0, 00:11:02.063 "r_mbytes_per_sec": 0, 00:11:02.063 "w_mbytes_per_sec": 0 00:11:02.063 }, 00:11:02.063 "claimed": false, 00:11:02.063 "zoned": false, 
00:11:02.063 "supported_io_types": { 00:11:02.063 "read": true, 00:11:02.063 "write": true, 00:11:02.063 "unmap": true, 00:11:02.063 "flush": false, 00:11:02.063 "reset": true, 00:11:02.063 "nvme_admin": false, 00:11:02.063 "nvme_io": false, 00:11:02.063 "nvme_io_md": false, 00:11:02.063 "write_zeroes": true, 00:11:02.063 "zcopy": false, 00:11:02.063 "get_zone_info": false, 00:11:02.063 "zone_management": false, 00:11:02.063 "zone_append": false, 00:11:02.063 "compare": false, 00:11:02.063 "compare_and_write": false, 00:11:02.063 "abort": false, 00:11:02.063 "seek_hole": true, 00:11:02.063 "seek_data": true, 00:11:02.063 "copy": false, 00:11:02.064 "nvme_iov_md": false 00:11:02.064 }, 00:11:02.064 "driver_specific": { 00:11:02.064 "lvol": { 00:11:02.064 "lvol_store_uuid": "3ae7e036-2f0a-43e1-a9e5-d908491637e3", 00:11:02.064 "base_bdev": "aio_bdev", 00:11:02.064 "thin_provision": false, 00:11:02.064 "num_allocated_clusters": 38, 00:11:02.064 "snapshot": false, 00:11:02.064 "clone": false, 00:11:02.064 "esnap_clone": false 00:11:02.064 } 00:11:02.064 } 00:11:02.064 } 00:11:02.064 ] 00:11:02.064 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:02.064 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:11:02.064 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:02.322 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:02.322 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:11:02.322 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:02.580 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:02.580 07:16:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:02.580 [2024-07-25 07:16:35.014385] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:02.580 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:11:02.580 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:02.580 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:11:02.581 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:02.581 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" 
in
00:11:02.581 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:11:02.581 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:02.581 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:11:02.581 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:02.581 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:11:02.581 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:11:02.581 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3
00:11:02.839 request:
00:11:02.839 {
00:11:02.839 "uuid": "3ae7e036-2f0a-43e1-a9e5-d908491637e3",
00:11:02.839 "method": "bdev_lvol_get_lvstores",
00:11:02.839 "req_id": 1
00:11:02.839 }
00:11:02.839 Got JSON-RPC error response
00:11:02.839 response:
00:11:02.839 {
00:11:02.839 "code": -19,
00:11:02.839 "message": "No such device"
00:11:02.839 }
00:11:02.839 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1
00:11:02.839 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:02.839 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:02.839 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:02.839 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:11:03.098 aio_bdev
00:11:03.098 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 427ab730-aa8b-4151-9ec5-9dc96b85219e
00:11:03.098 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=427ab730-aa8b-4151-9ec5-9dc96b85219e
00:11:03.098 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:03.098 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i
00:11:03.098 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:03.098 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:03.098 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:11:03.098 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty
-- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 427ab730-aa8b-4151-9ec5-9dc96b85219e -t 2000 00:11:03.356 [ 00:11:03.356 { 00:11:03.356 "name": "427ab730-aa8b-4151-9ec5-9dc96b85219e", 00:11:03.356 "aliases": [ 00:11:03.356 "lvs/lvol" 00:11:03.356 ], 00:11:03.356 "product_name": "Logical Volume", 00:11:03.356 "block_size": 4096, 00:11:03.356 "num_blocks": 38912, 00:11:03.356 "uuid": "427ab730-aa8b-4151-9ec5-9dc96b85219e", 00:11:03.356 "assigned_rate_limits": { 00:11:03.356 "rw_ios_per_sec": 0, 00:11:03.356 "rw_mbytes_per_sec": 0, 00:11:03.356 "r_mbytes_per_sec": 0, 00:11:03.356 "w_mbytes_per_sec": 0 00:11:03.356 }, 00:11:03.356 "claimed": false, 00:11:03.356 "zoned": false, 00:11:03.356 "supported_io_types": { 00:11:03.356 "read": true, 00:11:03.356 "write": true, 00:11:03.356 "unmap": true, 00:11:03.356 "flush": false, 00:11:03.356 "reset": true, 00:11:03.356 "nvme_admin": false, 00:11:03.356 "nvme_io": false, 00:11:03.356 "nvme_io_md": false, 00:11:03.356 "write_zeroes": true, 00:11:03.356 "zcopy": false, 00:11:03.356 "get_zone_info": false, 00:11:03.356 "zone_management": false, 00:11:03.356 "zone_append": false, 00:11:03.356 "compare": false, 00:11:03.356 "compare_and_write": false, 00:11:03.356 "abort": false, 00:11:03.356 "seek_hole": true, 00:11:03.356 "seek_data": true, 00:11:03.356 "copy": false, 00:11:03.356 "nvme_iov_md": false 00:11:03.356 }, 00:11:03.356 "driver_specific": { 00:11:03.356 "lvol": { 00:11:03.356 "lvol_store_uuid": "3ae7e036-2f0a-43e1-a9e5-d908491637e3", 00:11:03.356 "base_bdev": "aio_bdev", 00:11:03.356 "thin_provision": false, 00:11:03.356 "num_allocated_clusters": 38, 00:11:03.356 "snapshot": false, 00:11:03.356 "clone": false, 00:11:03.356 "esnap_clone": false 00:11:03.356 } 00:11:03.356 } 00:11:03.356 } 00:11:03.356 ] 00:11:03.356 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:03.356 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:11:03.356 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:03.356 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:03.356 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:11:03.356 07:16:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:03.614 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:03.614 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 427ab730-aa8b-4151-9ec5-9dc96b85219e 00:11:03.872 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3ae7e036-2f0a-43e1-a9e5-d908491637e3 00:11:04.130 07:16:36 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:04.130 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:04.130 00:11:04.130 real 0m17.393s 00:11:04.130 user 0m44.952s 00:11:04.130 sys 0m3.360s 00:11:04.130 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.130 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:04.130 ************************************ 00:11:04.130 END TEST lvs_grow_dirty 00:11:04.130 ************************************ 00:11:04.130 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:04.130 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:11:04.130 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:11:04.130 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:11:04.130 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:04.389 nvmf_trace.0 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:04.389 rmmod nvme_rdma 00:11:04.389 rmmod nvme_fabrics 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2569748 ']' 00:11:04.389 07:16:36 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2569748 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2569748 ']' 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2569748 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2569748 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2569748' 00:11:04.389 killing process with pid 2569748 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2569748 00:11:04.389 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2569748 00:11:04.648 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:04.648 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:04.648 00:11:04.648 real 0m42.929s 00:11:04.648 user 1m6.810s 00:11:04.648 sys 0m11.340s 00:11:04.648 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.648 07:16:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:04.648 ************************************ 00:11:04.648 END TEST nvmf_lvs_grow 00:11:04.648 ************************************ 00:11:04.648 07:16:37 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:04.648 07:16:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:04.648 07:16:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.648 07:16:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.648 ************************************ 00:11:04.648 START TEST nvmf_bdev_io_wait 00:11:04.648 ************************************ 00:11:04.648 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:04.907 * Looking for test storage... 
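The bdev_io_wait test that starts here stresses SPDK's I/O-wait path: the target is launched with a deliberately tiny bdev I/O pool (rpc.py bdev_set_options -p 5 -c 1 further down), and several bdevperf instances then drive the same NVMe-oF namespace at queue depth 128, so pool exhaustion pushes submissions through the bdev layer's spdk_bdev_queue_io_wait() retry path. The concurrent generators launched below look like this; the write/read/flush invocations are visible in this excerpt, while the unmap job's flags are an assumption following the same pattern:

    bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
    # Each instance gets its own core mask and shm instance id, and reads a
    # generated NVMe-oF attach config from fd 63 (see gen_nvmf_target_json below).
    $bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 &
    $bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 &
    $bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 &
    $bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 &  # flags assumed, not shown in this excerpt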
00:11:04.907 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:04.907 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.908 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:04.908 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:04.908 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:04.908 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.908 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.908 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.908 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:04.908 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:04.908 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:11:04.908 07:16:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 
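For orientation, the device scan that follows classifies NICs by PCI vendor:device pairs cached from a bus scan; because this job runs with SPDK_TEST_NVMF_NICS=mlx5, the candidate list collapses to the Mellanox array. Roughly, keeping only the branch this run takes (0x15b3:0x1015 is the ConnectX-4 Lx matched twice below; pci_bus_cache usage is a simplified sketch of nvmf/common.sh, not its exact code):

    intel=0x8086 mellanox=0x15b3
    # pci_bus_cache maps "vendor:device" to bus addresses; on this rig it yields
    # 0000:d9:00.0 and 0000:d9:00.1, the two ports of one ConnectX-4 Lx.
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
    [[ $SPDK_TEST_NVMF_NICS == mlx5 ]] && pci_devs=("${mlx[@]}")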
00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:14.878 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:14.878 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:14.878 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:14.878 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:14.878 07:16:45 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:14.878 07:16:45 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:14.878 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:14.879 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:14.879 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:14.879 altname enp217s0f0np0 00:11:14.879 altname ens818f0np0 00:11:14.879 inet 192.168.100.8/24 scope global mlx_0_0 00:11:14.879 valid_lft forever preferred_lft forever 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:14.879 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:14.879 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:14.879 altname enp217s0f1np1 00:11:14.879 altname ens818f1np1 00:11:14.879 inet 192.168.100.9/24 scope global mlx_0_1 00:11:14.879 valid_lft forever preferred_lft forever 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:14.879 07:16:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:14.879 07:16:46 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:14.879 192.168.100.9' 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:14.879 192.168.100.9' 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:14.879 192.168.100.9' 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2574624 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2574624 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2574624 ']' 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.879 [2024-07-25 07:16:46.138828] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
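The two listen addresses just assigned were derived by walking the RDMA-capable netdevs; restated as a sketch that mirrors the @456-@458 steps above:

    # One IPv4 address per RDMA-capable interface (mlx_0_0 and mlx_0_1 here),
    # then the list is split into first/second target IPs.
    RDMA_IP_LIST=$(for nic in $(get_rdma_if_list); do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9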
00:11:14.879 [2024-07-25 07:16:46.138893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.879 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.879 [2024-07-25 07:16:46.225765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.879 [2024-07-25 07:16:46.296254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.879 [2024-07-25 07:16:46.296297] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.879 [2024-07-25 07:16:46.296307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.879 [2024-07-25 07:16:46.296315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.879 [2024-07-25 07:16:46.296338] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.879 [2024-07-25 07:16:46.296394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.879 [2024-07-25 07:16:46.296510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.879 [2024-07-25 07:16:46.296735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.879 [2024-07-25 07:16:46.296737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:14.879 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.880 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.880 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:14.880 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.880 07:16:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.880 [2024-07-25 07:16:47.091193] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d1fff0/0x1d244e0) succeed. 00:11:14.880 [2024-07-25 07:16:47.099952] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d21630/0x1d65b70) succeed. 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.880 Malloc0 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.880 [2024-07-25 07:16:47.279077] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2574913 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2574915 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:14.880 { 00:11:14.880 "params": { 00:11:14.880 "name": "Nvme$subsystem", 00:11:14.880 "trtype": "$TEST_TRANSPORT", 00:11:14.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:14.880 "adrfam": "ipv4", 00:11:14.880 "trsvcid": "$NVMF_PORT", 00:11:14.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:14.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:14.880 "hdgst": ${hdgst:-false}, 00:11:14.880 "ddgst": ${ddgst:-false} 00:11:14.880 }, 00:11:14.880 "method": "bdev_nvme_attach_controller" 00:11:14.880 } 00:11:14.880 EOF 00:11:14.880 )") 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2574917 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:14.880 { 00:11:14.880 "params": { 00:11:14.880 "name": "Nvme$subsystem", 00:11:14.880 "trtype": "$TEST_TRANSPORT", 00:11:14.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:14.880 "adrfam": "ipv4", 00:11:14.880 "trsvcid": "$NVMF_PORT", 00:11:14.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:14.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:14.880 "hdgst": ${hdgst:-false}, 00:11:14.880 "ddgst": ${ddgst:-false} 00:11:14.880 }, 00:11:14.880 "method": "bdev_nvme_attach_controller" 00:11:14.880 } 00:11:14.880 EOF 00:11:14.880 )") 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2574920 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:14.880 { 00:11:14.880 "params": { 00:11:14.880 "name": 
"Nvme$subsystem", 00:11:14.880 "trtype": "$TEST_TRANSPORT", 00:11:14.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:14.880 "adrfam": "ipv4", 00:11:14.880 "trsvcid": "$NVMF_PORT", 00:11:14.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:14.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:14.880 "hdgst": ${hdgst:-false}, 00:11:14.880 "ddgst": ${ddgst:-false} 00:11:14.880 }, 00:11:14.880 "method": "bdev_nvme_attach_controller" 00:11:14.880 } 00:11:14.880 EOF 00:11:14.880 )") 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:14.880 { 00:11:14.880 "params": { 00:11:14.880 "name": "Nvme$subsystem", 00:11:14.880 "trtype": "$TEST_TRANSPORT", 00:11:14.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:14.880 "adrfam": "ipv4", 00:11:14.880 "trsvcid": "$NVMF_PORT", 00:11:14.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:14.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:14.880 "hdgst": ${hdgst:-false}, 00:11:14.880 "ddgst": ${ddgst:-false} 00:11:14.880 }, 00:11:14.880 "method": "bdev_nvme_attach_controller" 00:11:14.880 } 00:11:14.880 EOF 00:11:14.880 )") 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2574913 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:14.880 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:14.881 "params": { 00:11:14.881 "name": "Nvme1", 00:11:14.881 "trtype": "rdma", 00:11:14.881 "traddr": "192.168.100.8", 00:11:14.881 "adrfam": "ipv4", 00:11:14.881 "trsvcid": "4420", 00:11:14.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.881 "hdgst": false, 00:11:14.881 "ddgst": false 00:11:14.881 }, 00:11:14.881 "method": "bdev_nvme_attach_controller" 00:11:14.881 }' 00:11:14.881 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:11:14.881 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:14.881 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:14.881 "params": { 00:11:14.881 "name": "Nvme1", 00:11:14.881 "trtype": "rdma", 00:11:14.881 "traddr": "192.168.100.8", 00:11:14.881 "adrfam": "ipv4", 00:11:14.881 "trsvcid": "4420", 00:11:14.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.881 "hdgst": false, 00:11:14.881 "ddgst": false 00:11:14.881 }, 00:11:14.881 "method": "bdev_nvme_attach_controller" 00:11:14.881 }' 00:11:14.881 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:14.881 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:14.881 "params": { 00:11:14.881 "name": "Nvme1", 00:11:14.881 "trtype": "rdma", 00:11:14.881 "traddr": "192.168.100.8", 00:11:14.881 "adrfam": "ipv4", 00:11:14.881 "trsvcid": "4420", 00:11:14.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.881 "hdgst": false, 00:11:14.881 "ddgst": false 00:11:14.881 }, 00:11:14.881 "method": "bdev_nvme_attach_controller" 00:11:14.881 }' 00:11:14.881 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:14.881 07:16:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:14.881 "params": { 00:11:14.881 "name": "Nvme1", 00:11:14.881 "trtype": "rdma", 00:11:14.881 "traddr": "192.168.100.8", 00:11:14.881 "adrfam": "ipv4", 00:11:14.881 "trsvcid": "4420", 00:11:14.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.881 "hdgst": false, 00:11:14.881 "ddgst": false 00:11:14.881 }, 00:11:14.881 "method": "bdev_nvme_attach_controller" 00:11:14.881 }' 00:11:14.881 [2024-07-25 07:16:47.330855] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:11:14.881 [2024-07-25 07:16:47.330856] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:11:14.881 [2024-07-25 07:16:47.330911] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:14.881 [2024-07-25 07:16:47.330912] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:14.881 [2024-07-25 07:16:47.332825] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:11:14.881 [2024-07-25 07:16:47.332874] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:14.881 [2024-07-25 07:16:47.333862] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:11:14.881 [2024-07-25 07:16:47.333907] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:14.881 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.138 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.138 [2024-07-25 07:16:47.542470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.138 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.138 [2024-07-25 07:16:47.617031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:11:15.138 [2024-07-25 07:16:47.633086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.395 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.395 [2024-07-25 07:16:47.707330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:15.395 [2024-07-25 07:16:47.728706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.395 [2024-07-25 07:16:47.770674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.395 [2024-07-25 07:16:47.814713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:15.395 [2024-07-25 07:16:47.845836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:15.395 Running I/O for 1 seconds... 00:11:15.395 Running I/O for 1 seconds... 00:11:15.652 Running I/O for 1 seconds... 00:11:15.652 Running I/O for 1 seconds... 00:11:16.581 00:11:16.581 Latency(us) 00:11:16.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.581 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:16.582 Nvme1n1 : 1.00 21221.85 82.90 0.00 0.00 6015.55 3774.87 14260.63 00:11:16.582 =================================================================================================================== 00:11:16.582 Total : 21221.85 82.90 0.00 0.00 6015.55 3774.87 14260.63 00:11:16.582 00:11:16.582 Latency(us) 00:11:16.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.582 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:16.582 Nvme1n1 : 1.01 15201.96 59.38 0.00 0.00 8392.23 5636.10 18350.08 00:11:16.582 =================================================================================================================== 00:11:16.582 Total : 15201.96 59.38 0.00 0.00 8392.23 5636.10 18350.08 00:11:16.582 00:11:16.582 Latency(us) 00:11:16.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.582 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:16.582 Nvme1n1 : 1.00 15595.78 60.92 0.00 0.00 8187.79 3801.09 20447.23 00:11:16.582 =================================================================================================================== 00:11:16.582 Total : 15595.78 60.92 0.00 0.00 8187.79 3801.09 20447.23 00:11:16.582 00:11:16.582 Latency(us) 00:11:16.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.582 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:16.582 Nvme1n1 : 1.00 261697.19 1022.25 0.00 0.00 486.75 194.15 1939.87 00:11:16.582 =================================================================================================================== 00:11:16.582 Total : 261697.19 1022.25 0.00 0.00 486.75 194.15 1939.87 00:11:16.841 07:16:49 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2574915 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2574917 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2574920 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:16.841 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:16.841 rmmod nvme_rdma 00:11:17.099 rmmod nvme_fabrics 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2574624 ']' 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2574624 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2574624 ']' 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2574624 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2574624 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2574624' 00:11:17.099 killing process with pid 2574624 00:11:17.099 07:16:49 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2574624 00:11:17.099 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2574624 00:11:17.357 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:17.357 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:17.357 00:11:17.357 real 0m12.643s 00:11:17.357 user 0m21.892s 00:11:17.357 sys 0m8.110s 00:11:17.357 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.357 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:17.357 ************************************ 00:11:17.357 END TEST nvmf_bdev_io_wait 00:11:17.357 ************************************ 00:11:17.357 07:16:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:17.357 07:16:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:17.357 07:16:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.357 07:16:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:17.357 ************************************ 00:11:17.357 START TEST nvmf_queue_depth 00:11:17.357 ************************************ 00:11:17.357 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:17.615 * Looking for test storage... 
00:11:17.615 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:17.615 
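Note: the enormous PATH above is expected, not corruption. Each test stage re-sources /etc/opt/spdk-pkgdep/paths/export.sh, and every pass apparently prepends the same go/protoc/golangci directories again, so duplicate entries accumulate; lookup always resolves to the first match, so the repetition is cosmetic. If one ever wanted to deduplicate such a PATH by hand (a side note, the harness does not do this), a sketch:

  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')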
07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:17.615 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:17.616 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:17.616 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.616 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:17.616 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:17.616 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:17.616 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.616 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.616 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.616 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:17.616 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:17.616 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:11:17.616 07:16:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 
00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:25.799 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:25.799 
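Note: NIC discovery here is a PCI ID table walk. nvmf/common.sh collects the known Intel (e810, x722) and Mellanox (vendor 0x15b3) device IDs into arrays, and this host's port 0000:d9:00.0 matches 0x15b3:0x1015, which reads as a ConnectX-4 Lx; its second port is found just below. Because the transport is RDMA, NVME_CONNECT is also rewritten to pass -i 15, which, assuming nvme-cli's usual flag meanings, caps the number of I/O queues requested per connect. The same inventory can be taken outside the harness:

  lspci -nn -d 15b3:    # lists the Mellanox functions; both ports here show device ID 0x1015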
07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:25.799 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:25.799 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:25.799 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 
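Note: the PCI-function-to-netdev mapping is read straight out of sysfs. The glob /sys/bus/pci/devices/$pci/net/* holds one entry per port, which is how 0000:d9:00.0 resolves to mlx_0_0 and 0000:d9:00.1 to mlx_0_1 above. Reproducible by hand:

  ls /sys/bus/pci/devices/0000:d9:00.0/net/    # -> mlx_0_0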
00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:25.799 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:25.800 07:16:58 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:25.800 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:25.800 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:25.800 altname enp217s0f0np0 00:11:25.800 altname ens818f0np0 00:11:25.800 inet 192.168.100.8/24 scope global mlx_0_0 00:11:25.800 valid_lft forever preferred_lft forever 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:25.800 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:25.800 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:25.800 altname enp217s0f1np1 00:11:25.800 altname ens818f1np1 00:11:25.800 inet 192.168.100.9/24 scope global mlx_0_1 00:11:25.800 valid_lft forever preferred_lft forever 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 
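Note: by this point rdma_device_init has loaded the RDMA stack (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm), and get_ip_address extracts each port's IPv4 address with a short pipeline; the interface dumps above confirm 192.168.100.8 on mlx_0_0 and 192.168.100.9 on mlx_0_1. The same one-liner standalone:

  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8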
00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:11:25.800 192.168.100.9' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:25.800 192.168.100.9' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:25.800 192.168.100.9' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2579379 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2579379 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2579379 ']' 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.800 07:16:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:26.060 [2024-07-25 07:16:58.371389] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:11:26.060 [2024-07-25 07:16:58.371446] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.060 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.060 [2024-07-25 07:16:58.452773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.060 [2024-07-25 07:16:58.523132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.060 [2024-07-25 07:16:58.523173] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.060 [2024-07-25 07:16:58.523182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.060 [2024-07-25 07:16:58.523190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.060 [2024-07-25 07:16:58.523213] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.060 [2024-07-25 07:16:58.523234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:26.999 [2024-07-25 07:16:59.238691] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x242be90/0x2430380) succeed. 00:11:26.999 [2024-07-25 07:16:59.247923] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x242d390/0x2471a10) succeed. 
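Note: with nvmf_tgt now running on core mask 0x2 and both mlx5 IB devices registered, the lines that follow stand up the target over the RPC socket: transport, backing bdev, subsystem, namespace, listener. For reference, the equivalent sequence with SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (a sketch of the same five RPCs the test issues through its rpc_cmd wrapper):

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420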
00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:26.999 Malloc0 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:26.999 [2024-07-25 07:16:59.349843] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2579609 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2579609 /var/tmp/bdevperf.sock 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2579609 ']' 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:26.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.999 07:16:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:26.999 [2024-07-25 07:16:59.401726] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:11:26.999 [2024-07-25 07:16:59.401784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2579609 ] 00:11:26.999 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.999 [2024-07-25 07:16:59.486295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.258 [2024-07-25 07:16:59.559668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.827 07:17:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.827 07:17:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:27.827 07:17:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:27.827 07:17:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.827 07:17:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:27.827 NVMe0n1 00:11:27.827 07:17:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.827 07:17:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:28.086 Running I/O for 10 seconds... 
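queue_depth.sh@24-35, traced above, provisions the target and then wires bdevperf to it over a second RPC socket. A condensed sketch of that sequence, with every flag copied from the trace (the relative paths are an assumption):

  # export a 64 MiB, 512-byte-block malloc bdev through cnode1 on RDMA port 4420
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # bdevperf is started with -z, so it idles until told what to do over its own socket;
  # attach the remote namespace, then run 10 s of verify I/O at queue depth 1024
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests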
00:11:38.070 
00:11:38.070                                                            Latency(us)
00:11:38.070 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:11:38.070 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:11:38.070 	 Verification LBA range: start 0x0 length 0x4000
00:11:38.070 	 NVMe0n1                  :      10.04   18226.39      71.20      0.00      0.00   56020.15   16462.64   36700.16
00:11:38.070 ===================================================================================================================
00:11:38.070 Total                       :              18226.39      71.20      0.00      0.00   56020.15   16462.64   36700.16
00:11:38.070 0
00:11:38.070 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2579609
00:11:38.070 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2579609 ']'
00:11:38.070 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2579609
00:11:38.070 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:11:38.070 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:38.070 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2579609
00:11:38.070 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:38.070 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:38.070 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2579609'
00:11:38.070 killing process with pid 2579609
00:11:38.070 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2579609
00:11:38.070 Received shutdown signal, test time was about 10.000000 seconds
00:11:38.070 
00:11:38.070                                                            Latency(us)
00:11:38.070 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:11:38.070 ===================================================================================================================
00:11:38.071 Total                       :                  0.00       0.00      0.00      0.00       0.00       0.00       0.00
00:11:38.071 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2579609
00:11:38.329 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:11:38.329 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:11:38.330 rmmod nvme_rdma
00:11:38.330 rmmod nvme_fabrics
00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
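A quick consistency check on the result table above: bdevperf's MiB/s column is just IOPS multiplied by the 4096-byte IO size. With the figures from this run:

  # 18226.39 IO/s x 4096 B / 2^20 B/MiB -> prints 71.20, matching the NVMe0n1 row
  awk 'BEGIN { printf "%.2f\n", 18226.39 * 4096 / (1024 * 1024) }'

The second, all-zero table is printed as bdevperf tears down after the kill, once the timed run has already completed and there is no further I/O to report.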
00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2579379 ']' 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2579379 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2579379 ']' 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2579379 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2579379 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2579379' 00:11:38.330 killing process with pid 2579379 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2579379 00:11:38.330 07:17:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2579379 00:11:38.589 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:38.589 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:38.589 00:11:38.589 real 0m21.265s 00:11:38.589 user 0m26.655s 00:11:38.589 sys 0m7.108s 00:11:38.589 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.589 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:38.589 ************************************ 00:11:38.589 END TEST nvmf_queue_depth 00:11:38.589 ************************************ 00:11:38.589 07:17:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:38.589 07:17:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:38.589 07:17:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.589 07:17:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:38.848 ************************************ 00:11:38.848 START TEST nvmf_target_multipath 00:11:38.848 ************************************ 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:38.848 * Looking for test storage... 
00:11:38.848 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:38.848 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:11:38.849 07:17:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@296 -- # e810=() 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:46.972 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:46.972 
07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:46.972 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:46.972 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:46.973 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: 
mlx_0_1' 00:11:46.973 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 
00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:46.973 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:46.973 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:46.973 altname enp217s0f0np0 00:11:46.973 altname ens818f0np0 00:11:46.973 inet 192.168.100.8/24 scope global mlx_0_0 00:11:46.973 valid_lft forever preferred_lft forever 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:46.973 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:46.973 
link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:46.973 altname enp217s0f1np1 00:11:46.973 altname ens818f1np1 00:11:46.973 inet 192.168.100.9/24 scope global mlx_0_1 00:11:46.973 valid_lft forever preferred_lft forever 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:46.973 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:46.974 192.168.100.9' 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:46.974 192.168.100.9' 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:46.974 192.168.100.9' 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:11:46.974 run this test only with TCP transport for now 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # 
'[' rdma == tcp ']' 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:46.974 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:47.234 rmmod nvme_rdma 00:11:47.234 rmmod nvme_fabrics 00:11:47.234 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:47.234 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:47.234 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:47.234 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:47.234 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:47.235 00:11:47.235 real 0m8.434s 00:11:47.235 user 0m2.265s 00:11:47.235 sys 0m6.380s 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:47.235 ************************************ 00:11:47.235 END TEST nvmf_target_multipath 00:11:47.235 ************************************ 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:47.235 ************************************ 00:11:47.235 START TEST nvmf_zcopy 00:11:47.235 ************************************ 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:47.235 * Looking for test storage... 00:11:47.235 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:47.235 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.495 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.496 07:17:19 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:47.496 
07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:11:47.496 07:17:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}")
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}")
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}")
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:11:55.628 Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:11:55.628 Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:11:55.628 Found net devices under 0000:d9:00.0: mlx_0_0
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:11:55.628 Found net devices under 0000:d9:00.1: mlx_0_1
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]]
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # uname
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']'
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad
00:11:55.628 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}'
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:11:55.888 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:55.888 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:11:55.888 altname enp217s0f0np0
00:11:55.888 altname ens818f0np0
00:11:55.888 inet 192.168.100.8/24 scope global mlx_0_0
00:11:55.888 valid_lft forever preferred_lft forever
00:11:55.888 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:11:55.889 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:55.889 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:11:55.889 altname enp217s0f1np1
00:11:55.889 altname ens818f1np1
00:11:55.889 inet 192.168.100.9/24 scope global mlx_0_1
00:11:55.889 valid_lft forever preferred_lft forever
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]]
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8
00:11:55.889 192.168.100.9'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8
00:11:55.889 192.168.100.9'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8
00:11:55.889 192.168.100.9'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2589606
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2589606
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2589606 ']'
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:55.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:55.889 07:17:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:56.149 [2024-07-25 07:17:28.443234] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:11:56.149 [2024-07-25 07:17:28.443292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:56.149 EAL: No free 2048 kB hugepages reported on node 1
00:11:56.149 [2024-07-25 07:17:28.527588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:56.149 [2024-07-25 07:17:28.602884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:56.149 [2024-07-25 07:17:28.602917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-07-25 07:17:28.602927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-07-25 07:17:28.602936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-07-25 07:17:28.602943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:56.149 [2024-07-25 07:17:28.602965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:11:56.718 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:56.718 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0
00:11:56.718 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:56.718 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:56.718 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']'
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma'
00:11:56.978 Unsupported transport: rdma
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@808 -- # type=--id
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@809 -- # id=0
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # for n in $shm_files
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:11:56.978 nvmf_trace.0
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # return 0
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:11:56.978 rmmod nvme_rdma
00:11:56.978 rmmod nvme_fabrics
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2589606 ']'
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2589606
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2589606 ']'
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2589606
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2589606
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2589606'
00:11:56.978 killing process with pid 2589606
00:11:56.978 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2589606
00:11:57.237 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2589606
00:11:57.237 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:57.237 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:11:57.237
00:11:57.237 real 0m9.976s
00:11:57.237 user 0m3.923s
00:11:57.237 sys 0m6.862s
00:11:57.237 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:57.237 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:57.237 ************************************
00:11:57.237 END TEST nvmf_zcopy
00:11:57.237 ************************************
00:11:57.237 07:17:29 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma
00:11:57.237 07:17:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:11:57.237 07:17:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:57.237 07:17:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:57.237 ************************************
00:11:57.237 START TEST nvmf_nmic
00:11:57.237 ************************************
00:11:57.237 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma
00:11:57.497 * Looking for test storage...
00:11:57.497 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']'
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:57.497 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs
00:11:57.498 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no
00:11:57.498 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns
00:11:57.498 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:57.498 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:57.498 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:57.498 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:11:57.498 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:11:57.498 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable
00:11:57.498 07:17:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=()
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=()
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=()
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=()
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=()
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=()
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=()
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:05.625 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}")
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}")
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}")
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:12:05.626 Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:12:05.626 Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:12:05.626 Found net devices under 0000:d9:00.0: mlx_0_0
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:12:05.626 Found net devices under 0000:d9:00.1: mlx_0_1
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]]
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # uname
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']'
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:12:05.626 07:17:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}'
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:12:05.626 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:12:05.626 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:12:05.626 altname enp217s0f0np0
00:12:05.626 altname ens818f0np0
00:12:05.626 inet 192.168.100.8/24 scope global mlx_0_0
00:12:05.626 valid_lft forever preferred_lft forever
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}'
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:12:05.626 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:12:05.626 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:12:05.626 altname enp217s0f1np1
00:12:05.626 altname ens818f1np1
00:12:05.626 inet 192.168.100.9/24 scope global mlx_0_1
00:12:05.626 valid_lft forever preferred_lft forever
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]]
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:12:05.626 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}'
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}'
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8
00:12:05.627 192.168.100.9'
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8
00:12:05.627 192.168.100.9'
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8
00:12:05.627 192.168.100.9'
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']'
00:12:05.627 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']'
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']'
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2593872
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2593872
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2593872 ']'
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:05.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:05.887 07:17:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:05.887 [2024-07-25 07:17:38.238539] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:12:05.887 [2024-07-25 07:17:38.238594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:05.887 EAL: No free 2048 kB hugepages reported on node 1
00:12:05.887 [2024-07-25 07:17:38.324670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:05.887 [2024-07-25 07:17:38.403073] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:05.887 [2024-07-25 07:17:38.403110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-07-25 07:17:38.403120] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-07-25 07:17:38.403129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-07-25 07:17:38.403135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:05.887 [2024-07-25 07:17:38.403183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.887 [2024-07-25 07:17:38.403203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.887 [2024-07-25 07:17:38.403277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.887 [2024-07-25 07:17:38.403279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.823 [2024-07-25 07:17:39.123763] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd31dd0/0xd362c0) succeed. 00:12:06.823 [2024-07-25 07:17:39.133171] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd33410/0xd77950) succeed. 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.823 Malloc0 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:06.823 07:17:39 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.823 [2024-07-25 07:17:39.299904] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.823 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:06.824 test case1: single bdev can't be used in multiple subsystems 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.824 [2024-07-25 07:17:39.327687] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:06.824 [2024-07-25 07:17:39.327708] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:06.824 [2024-07-25 07:17:39.327717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.824 request: 00:12:06.824 { 00:12:06.824 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:06.824 "namespace": { 00:12:06.824 "bdev_name": "Malloc0", 00:12:06.824 "no_auto_visible": false 00:12:06.824 }, 00:12:06.824 "method": "nvmf_subsystem_add_ns", 00:12:06.824 "req_id": 1 00:12:06.824 } 00:12:06.824 Got JSON-RPC error response 00:12:06.824 response: 00:12:06.824 { 00:12:06.824 "code": -32602, 00:12:06.824 "message": "Invalid parameters" 00:12:06.824 } 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:06.824 Adding namespace failed - expected result. 
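[annotation] Condensed, test case1 above exercises the bdev claim model: Malloc0 is already held with an exclusive_write claim by nqn.2016-06.io.spdk:cnode1, so attaching it to a second subsystem must fail, which is the JSON-RPC error -32602 in the trace. The RPC sequence, reconstructed from the xtrace output (rpc.py invocation abbreviated):

rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
# bdev_open refuses a second exclusive_write claim on Malloc0, so this RPC errors out.
if ! rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo ' Adding namespace failed - expected result.'
fi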
00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:06.824 test case2: host connect to nvmf target in multiple paths 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.824 [2024-07-25 07:17:39.343768] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.824 07:17:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:08.201 07:17:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:12:09.137 07:17:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:09.137 07:17:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:09.137 07:17:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.137 07:17:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:09.137 07:17:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:11.039 07:17:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:11.039 07:17:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:11.039 07:17:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.039 07:17:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:11.039 07:17:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.039 07:17:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:11.039 07:17:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:11.039 [global] 00:12:11.039 thread=1 00:12:11.039 invalidate=1 00:12:11.039 rw=write 00:12:11.039 time_based=1 00:12:11.039 runtime=1 00:12:11.039 ioengine=libaio 00:12:11.039 direct=1 00:12:11.039 bs=4096 00:12:11.039 iodepth=1 00:12:11.039 norandommap=0 00:12:11.039 numjobs=1 00:12:11.039 00:12:11.039 verify_dump=1 00:12:11.039 verify_backlog=512 00:12:11.039 verify_state_save=0 00:12:11.039 do_verify=1 00:12:11.039 verify=crc32c-intel 00:12:11.039 [job0] 00:12:11.039 filename=/dev/nvme0n1 00:12:11.039 Could not set queue depth (nvme0n1) 00:12:11.297 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.297 fio-3.35 00:12:11.297 Starting 1 thread 00:12:12.676 00:12:12.676 job0: (groupid=0, jobs=1): err= 0: pid=2595035: Thu Jul 25 07:17:44 2024 00:12:12.676 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:12:12.676 slat (nsec): min=8272, max=33818, avg=9060.46, stdev=805.71 00:12:12.676 clat (usec): min=45, max=141, avg=58.59, stdev= 3.54 00:12:12.676 lat (usec): min=58, max=150, avg=67.65, stdev= 3.61 00:12:12.676 clat percentiles (usec): 00:12:12.676 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:12:12.676 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:12:12.676 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 64], 95.00th=[ 65], 00:12:12.676 | 99.00th=[ 68], 99.50th=[ 69], 99.90th=[ 72], 99.95th=[ 75], 00:12:12.676 | 99.99th=[ 141] 00:12:12.676 write: IOPS=7218, BW=28.2MiB/s (29.6MB/s)(28.2MiB/1001msec); 0 zone resets 00:12:12.676 slat (nsec): min=10061, max=44884, avg=10714.98, stdev=1098.27 00:12:12.676 clat (usec): min=42, max=102, avg=56.75, stdev= 3.59 00:12:12.676 lat (usec): min=58, max=147, avg=67.47, stdev= 3.76 00:12:12.676 clat percentiles (usec): 00:12:12.676 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 54], 00:12:12.676 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:12:12.676 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 63], 00:12:12.676 | 99.00th=[ 66], 99.50th=[ 68], 99.90th=[ 73], 99.95th=[ 80], 00:12:12.676 | 99.99th=[ 103] 00:12:12.676 bw ( KiB/s): min=28934, max=28934, per=100.00%, avg=28934.00, stdev= 0.00, samples=1 00:12:12.676 iops : min= 7233, max= 7233, avg=7233.00, stdev= 0.00, samples=1 00:12:12.676 lat (usec) : 50=0.65%, 100=99.33%, 250=0.01% 00:12:12.676 cpu : usr=11.00%, sys=18.00%, ctx=14394, majf=0, minf=2 00:12:12.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.676 issued rwts: total=7168,7226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.676 00:12:12.676 Run status group 0 (all jobs): 00:12:12.676 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:12:12.676 WRITE: bw=28.2MiB/s (29.6MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=28.2MiB (29.6MB), run=1001-1001msec 00:12:12.676 00:12:12.676 Disk stats (read/write): 00:12:12.676 nvme0n1: ios=6327/6656, merge=0/0, ticks=326/315, in_queue=641, util=90.48% 00:12:12.676 07:17:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.583 07:17:46 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:14.583 rmmod nvme_rdma 00:12:14.583 rmmod nvme_fabrics 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2593872 ']' 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2593872 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2593872 ']' 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2593872 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2593872 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2593872' 00:12:14.583 killing process with pid 2593872 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2593872 00:12:14.583 07:17:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2593872 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:14.843 00:12:14.843 real 0m17.468s 00:12:14.843 user 0m45.340s 00:12:14.843 sys 0m7.396s 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:14.843 ************************************ 00:12:14.843 END TEST nvmf_nmic 00:12:14.843 
************************************ 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:14.843 ************************************ 00:12:14.843 START TEST nvmf_fio_target 00:12:14.843 ************************************ 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:12:14.843 * Looking for test storage... 00:12:14.843 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.843 07:17:47 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.843 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.844 07:17:47 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.844 07:17:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 
00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:24.832 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 
-- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:24.832 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:24.832 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:24.832 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@414 -- # is_hw=yes 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:24.832 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:24.833 07:17:55 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:24.833 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:24.833 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:24.833 altname enp217s0f0np0 00:12:24.833 altname ens818f0np0 00:12:24.833 inet 192.168.100.8/24 scope global mlx_0_0 00:12:24.833 valid_lft forever preferred_lft forever 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:24.833 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:24.833 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:24.833 altname enp217s0f1np1 00:12:24.833 altname ens818f1np1 00:12:24.833 inet 192.168.100.9/24 scope global mlx_0_1 00:12:24.833 valid_lft forever preferred_lft forever 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == 
\r\d\m\a ]] 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:24.833 192.168.100.9' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:24.833 192.168.100.9' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:24.833 192.168.100.9' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2599743 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2599743 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2599743 ']' 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
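[annotation] The address harvesting traced above (get_rdma_if_list / get_ip_address) reduces to: take the net devices backing the two mlx5 ports, read one IPv4 address from each, and split the list into first and second target IPs. A condensed sketch with the interface names hard-coded from this run; the real helpers discover them via rxe_cfg and sysfs:

RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1
done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9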
00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:24.833 07:17:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.833 [2024-07-25 07:17:55.823217] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:12:24.833 [2024-07-25 07:17:55.823264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.833 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.833 [2024-07-25 07:17:55.905111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.833 [2024-07-25 07:17:55.978800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.833 [2024-07-25 07:17:55.978837] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.833 [2024-07-25 07:17:55.978847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.834 [2024-07-25 07:17:55.978854] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.834 [2024-07-25 07:17:55.978861] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.834 [2024-07-25 07:17:55.978925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.834 [2024-07-25 07:17:55.979018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.834 [2024-07-25 07:17:55.979103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.834 [2024-07-25 07:17:55.979105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.834 07:17:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:24.834 07:17:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:24.834 07:17:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.834 07:17:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:24.834 07:17:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.834 07:17:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.834 07:17:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:24.834 [2024-07-25 07:17:56.863847] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10b5dd0/0x10ba2c0) succeed. 00:12:24.834 [2024-07-25 07:17:56.872945] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10b7410/0x10fb950) succeed. 
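[annotation] From here fio.sh builds its target: nvmf_create_transport allocates the RDMA transport (the two create_ib_device notices above are the mlx5 ports being claimed), after which malloc bdevs are created and attached to cnode1 both directly and through a raid0 bdev. A condensed sketch of the bdev plumbing that follows in the trace; rpc.py path abbreviated, bdev names are whatever SPDK prints back, and the concat0 branch follows the same pattern:

rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Two plain malloc bdevs, plus two more striped into a raid0 bdev.
malloc_bdevs="$(rpc.py bdev_malloc_create 64 512) $(rpc.py bdev_malloc_create 64 512)"
raid_malloc_bdevs="$(rpc.py bdev_malloc_create 64 512) $(rpc.py bdev_malloc_create 64 512)"
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b "$raid_malloc_bdevs"

rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in $malloc_bdevs; do
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0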
00:12:24.834 07:17:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:24.834 07:17:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:24.834 07:17:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.095 07:17:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:25.095 07:17:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.095 07:17:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:25.095 07:17:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.355 07:17:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:25.355 07:17:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:25.614 07:17:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.873 07:17:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:25.873 07:17:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:26.132 07:17:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:26.133 07:17:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:26.133 07:17:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:26.133 07:17:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:26.392 07:17:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:26.652 07:17:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:26.652 07:17:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:26.652 07:17:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:26.652 07:17:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.911 07:17:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:27.170 [2024-07-25 07:17:59.496484] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:27.170 07:17:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:27.429 07:17:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:27.429 07:17:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:28.398 07:18:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:28.398 07:18:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:28.398 07:18:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.398 07:18:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:28.398 07:18:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:28.398 07:18:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:30.938 07:18:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:30.938 07:18:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:30.938 07:18:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.938 07:18:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:30.938 07:18:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.938 07:18:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:30.938 07:18:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:30.938 [global] 00:12:30.938 thread=1 00:12:30.938 invalidate=1 00:12:30.938 rw=write 00:12:30.938 time_based=1 00:12:30.938 runtime=1 00:12:30.938 ioengine=libaio 00:12:30.938 direct=1 00:12:30.938 bs=4096 00:12:30.938 iodepth=1 00:12:30.938 norandommap=0 00:12:30.938 numjobs=1 00:12:30.938 00:12:30.938 verify_dump=1 00:12:30.938 verify_backlog=512 00:12:30.938 verify_state_save=0 00:12:30.938 do_verify=1 00:12:30.938 verify=crc32c-intel 00:12:30.938 [job0] 00:12:30.938 filename=/dev/nvme0n1 00:12:30.938 [job1] 00:12:30.938 filename=/dev/nvme0n2 00:12:30.938 [job2] 00:12:30.938 filename=/dev/nvme0n3 00:12:30.938 [job3] 00:12:30.938 filename=/dev/nvme0n4 00:12:30.938 Could not set queue depth (nvme0n1) 00:12:30.938 Could not set queue depth (nvme0n2) 00:12:30.938 Could not set queue depth (nvme0n3) 00:12:30.938 Could not set queue depth (nvme0n4) 00:12:30.938 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.938 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.938 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.938 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.938 fio-3.35 00:12:30.938 Starting 4 threads 00:12:32.345 00:12:32.345 job0: (groupid=0, jobs=1): err= 0: pid=2601329: Thu Jul 25 07:18:04 2024 00:12:32.345 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:12:32.345 slat (nsec): min=8123, max=38951, avg=9027.50, stdev=1043.87 00:12:32.345 clat (usec): min=72, max=196, avg=127.34, stdev=13.35 00:12:32.345 lat (usec): min=81, max=205, avg=136.37, stdev=13.31 00:12:32.345 clat percentiles (usec): 00:12:32.345 | 1.00th=[ 93], 5.00th=[ 108], 10.00th=[ 113], 20.00th=[ 119], 00:12:32.345 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:12:32.345 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 151], 00:12:32.345 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 196], 00:12:32.345 | 99.99th=[ 196] 00:12:32.345 write: IOPS=3776, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1001msec); 0 zone resets 00:12:32.345 slat (nsec): min=9951, max=71770, avg=11136.08, stdev=1413.12 00:12:32.345 clat (usec): min=63, max=194, avg=119.37, stdev=14.62 00:12:32.345 lat (usec): min=74, max=205, avg=130.51, stdev=14.66 00:12:32.345 clat percentiles (usec): 00:12:32.345 | 1.00th=[ 82], 5.00th=[ 98], 10.00th=[ 103], 20.00th=[ 110], 00:12:32.345 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 122], 00:12:32.345 | 70.00th=[ 126], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 145], 00:12:32.345 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 192], 00:12:32.345 | 99.99th=[ 194] 00:12:32.345 bw ( KiB/s): min=16384, max=16384, per=23.03%, avg=16384.00, stdev= 0.00, samples=1 00:12:32.345 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:32.345 lat (usec) : 100=4.20%, 250=95.80% 00:12:32.345 cpu : usr=6.10%, sys=9.00%, ctx=7365, majf=0, minf=1 00:12:32.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.345 issued rwts: total=3584,3780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.345 job1: (groupid=0, jobs=1): err= 0: pid=2601343: Thu Jul 25 07:18:04 2024 00:12:32.345 read: IOPS=4672, BW=18.3MiB/s (19.1MB/s)(18.3MiB/1001msec) 00:12:32.345 slat (nsec): min=8031, max=27014, avg=8875.13, stdev=842.80 00:12:32.345 clat (usec): min=65, max=254, avg=91.35, stdev=20.87 00:12:32.345 lat (usec): min=75, max=263, avg=100.23, stdev=21.06 00:12:32.345 clat percentiles (usec): 00:12:32.345 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 80], 00:12:32.345 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 87], 00:12:32.345 | 70.00th=[ 89], 80.00th=[ 93], 90.00th=[ 135], 95.00th=[ 147], 00:12:32.345 | 99.00th=[ 161], 99.50th=[ 172], 99.90th=[ 196], 99.95th=[ 200], 00:12:32.345 | 99.99th=[ 255] 00:12:32.345 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:12:32.345 slat (nsec): min=9920, max=43647, avg=10726.43, stdev=1235.37 00:12:32.345 clat (usec): min=63, max=193, 
avg=89.24, stdev=21.04 00:12:32.345 lat (usec): min=74, max=203, avg=99.97, stdev=21.34 00:12:32.345 clat percentiles (usec): 00:12:32.345 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 77], 00:12:32.345 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 84], 00:12:32.345 | 70.00th=[ 87], 80.00th=[ 93], 90.00th=[ 131], 95.00th=[ 139], 00:12:32.345 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 182], 99.95th=[ 188], 00:12:32.345 | 99.99th=[ 194] 00:12:32.345 bw ( KiB/s): min=19592, max=19592, per=27.54%, avg=19592.00, stdev= 0.00, samples=1 00:12:32.345 iops : min= 4898, max= 4898, avg=4898.00, stdev= 0.00, samples=1 00:12:32.345 lat (usec) : 100=85.45%, 250=14.54%, 500=0.01% 00:12:32.345 cpu : usr=7.40%, sys=12.30%, ctx=9797, majf=0, minf=1 00:12:32.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.345 issued rwts: total=4677,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.345 job2: (groupid=0, jobs=1): err= 0: pid=2601363: Thu Jul 25 07:18:04 2024 00:12:32.345 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:12:32.345 slat (nsec): min=8337, max=23690, avg=9117.55, stdev=809.78 00:12:32.345 clat (usec): min=72, max=195, avg=127.10, stdev=12.87 00:12:32.345 lat (usec): min=81, max=204, avg=136.22, stdev=12.88 00:12:32.345 clat percentiles (usec): 00:12:32.345 | 1.00th=[ 89], 5.00th=[ 109], 10.00th=[ 114], 20.00th=[ 119], 00:12:32.345 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 129], 00:12:32.345 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:12:32.345 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 184], 00:12:32.345 | 99.99th=[ 196] 00:12:32.345 write: IOPS=3780, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1001msec); 0 zone resets 00:12:32.345 slat (nsec): min=10249, max=38759, avg=11223.00, stdev=1137.30 00:12:32.345 clat (usec): min=72, max=191, avg=119.58, stdev=13.61 00:12:32.345 lat (usec): min=84, max=202, avg=130.80, stdev=13.68 00:12:32.345 clat percentiles (usec): 00:12:32.345 | 1.00th=[ 87], 5.00th=[ 100], 10.00th=[ 105], 20.00th=[ 111], 00:12:32.345 | 30.00th=[ 114], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 121], 00:12:32.345 | 70.00th=[ 125], 80.00th=[ 130], 90.00th=[ 137], 95.00th=[ 143], 00:12:32.345 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 182], 00:12:32.345 | 99.99th=[ 192] 00:12:32.345 bw ( KiB/s): min=16384, max=16384, per=23.03%, avg=16384.00, stdev= 0.00, samples=1 00:12:32.345 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:32.345 lat (usec) : 100=3.37%, 250=96.63% 00:12:32.345 cpu : usr=5.40%, sys=10.20%, ctx=7368, majf=0, minf=2 00:12:32.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.345 issued rwts: total=3584,3784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.345 job3: (groupid=0, jobs=1): err= 0: pid=2601370: Thu Jul 25 07:18:04 2024 00:12:32.345 read: IOPS=5070, BW=19.8MiB/s (20.8MB/s)(19.8MiB/1001msec) 00:12:32.345 slat (nsec): min=8408, max=18430, avg=9174.53, stdev=711.79 00:12:32.345 clat (usec): min=73, 
max=166, avg=87.79, stdev= 6.67 00:12:32.345 lat (usec): min=82, max=175, avg=96.96, stdev= 6.73 00:12:32.345 clat percentiles (usec): 00:12:32.345 | 1.00th=[ 77], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 83], 00:12:32.345 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 87], 60.00th=[ 89], 00:12:32.345 | 70.00th=[ 91], 80.00th=[ 93], 90.00th=[ 96], 95.00th=[ 100], 00:12:32.345 | 99.00th=[ 108], 99.50th=[ 112], 99.90th=[ 130], 99.95th=[ 137], 00:12:32.345 | 99.99th=[ 167] 00:12:32.345 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:12:32.345 slat (nsec): min=10277, max=45351, avg=11005.97, stdev=1246.81 00:12:32.345 clat (usec): min=64, max=263, avg=84.18, stdev= 7.23 00:12:32.345 lat (usec): min=79, max=273, avg=95.18, stdev= 7.35 00:12:32.345 clat percentiles (usec): 00:12:32.345 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 80], 00:12:32.345 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 85], 00:12:32.345 | 70.00th=[ 87], 80.00th=[ 89], 90.00th=[ 92], 95.00th=[ 96], 00:12:32.345 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 127], 99.95th=[ 145], 00:12:32.345 | 99.99th=[ 265] 00:12:32.345 bw ( KiB/s): min=20576, max=20576, per=28.92%, avg=20576.00, stdev= 0.00, samples=1 00:12:32.345 iops : min= 5144, max= 5144, avg=5144.00, stdev= 0.00, samples=1 00:12:32.345 lat (usec) : 100=96.53%, 250=3.46%, 500=0.01% 00:12:32.345 cpu : usr=6.90%, sys=14.00%, ctx=10196, majf=0, minf=1 00:12:32.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.345 issued rwts: total=5076,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.345 00:12:32.345 Run status group 0 (all jobs): 00:12:32.345 READ: bw=66.0MiB/s (69.2MB/s), 14.0MiB/s-19.8MiB/s (14.7MB/s-20.8MB/s), io=66.1MiB (69.3MB), run=1001-1001msec 00:12:32.345 WRITE: bw=69.5MiB/s (72.9MB/s), 14.8MiB/s-20.0MiB/s (15.5MB/s-20.9MB/s), io=69.5MiB (72.9MB), run=1001-1001msec 00:12:32.345 00:12:32.345 Disk stats (read/write): 00:12:32.345 nvme0n1: ios=3057/3072, merge=0/0, ticks=378/342, in_queue=720, util=84.25% 00:12:32.345 nvme0n2: ios=3899/4096, merge=0/0, ticks=320/342, in_queue=662, util=85.07% 00:12:32.345 nvme0n3: ios=3012/3072, merge=0/0, ticks=358/345, in_queue=703, util=88.42% 00:12:32.345 nvme0n4: ios=4096/4378, merge=0/0, ticks=333/331, in_queue=664, util=89.46% 00:12:32.345 07:18:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:32.345 [global] 00:12:32.345 thread=1 00:12:32.345 invalidate=1 00:12:32.345 rw=randwrite 00:12:32.345 time_based=1 00:12:32.345 runtime=1 00:12:32.345 ioengine=libaio 00:12:32.345 direct=1 00:12:32.345 bs=4096 00:12:32.345 iodepth=1 00:12:32.345 norandommap=0 00:12:32.345 numjobs=1 00:12:32.345 00:12:32.345 verify_dump=1 00:12:32.345 verify_backlog=512 00:12:32.345 verify_state_save=0 00:12:32.345 do_verify=1 00:12:32.345 verify=crc32c-intel 00:12:32.345 [job0] 00:12:32.345 filename=/dev/nvme0n1 00:12:32.345 [job1] 00:12:32.346 filename=/dev/nvme0n2 00:12:32.346 [job2] 00:12:32.346 filename=/dev/nvme0n3 00:12:32.346 [job3] 00:12:32.346 filename=/dev/nvme0n4 00:12:32.346 Could not set queue depth (nvme0n1) 00:12:32.346 Could not set queue depth (nvme0n2) 00:12:32.346 Could not set 
queue depth (nvme0n3) 00:12:32.346 Could not set queue depth (nvme0n4) 00:12:32.611 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.611 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.611 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.611 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.611 fio-3.35 00:12:32.611 Starting 4 threads 00:12:34.013 00:12:34.013 job0: (groupid=0, jobs=1): err= 0: pid=2601784: Thu Jul 25 07:18:06 2024 00:12:34.013 read: IOPS=3603, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec) 00:12:34.013 slat (nsec): min=8533, max=38025, avg=10828.88, stdev=3075.46 00:12:34.013 clat (usec): min=86, max=169, avg=118.07, stdev= 7.39 00:12:34.013 lat (usec): min=95, max=187, avg=128.90, stdev= 7.08 00:12:34.013 clat percentiles (usec): 00:12:34.013 | 1.00th=[ 99], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 113], 00:12:34.013 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 119], 60.00th=[ 121], 00:12:34.013 | 70.00th=[ 123], 80.00th=[ 125], 90.00th=[ 127], 95.00th=[ 130], 00:12:34.013 | 99.00th=[ 135], 99.50th=[ 137], 99.90th=[ 145], 99.95th=[ 155], 00:12:34.013 | 99.99th=[ 169] 00:12:34.013 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:12:34.013 slat (nsec): min=9913, max=51882, avg=13147.31, stdev=3973.86 00:12:34.013 clat (usec): min=66, max=188, avg=112.45, stdev=12.17 00:12:34.013 lat (usec): min=77, max=200, avg=125.60, stdev=11.45 00:12:34.013 clat percentiles (usec): 00:12:34.013 | 1.00th=[ 86], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 105], 00:12:34.013 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 114], 00:12:34.013 | 70.00th=[ 116], 80.00th=[ 119], 90.00th=[ 124], 95.00th=[ 135], 00:12:34.013 | 99.00th=[ 161], 99.50th=[ 174], 99.90th=[ 182], 99.95th=[ 184], 00:12:34.013 | 99.99th=[ 190] 00:12:34.013 bw ( KiB/s): min=16384, max=16384, per=25.03%, avg=16384.00, stdev= 0.00, samples=1 00:12:34.013 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:34.013 lat (usec) : 100=5.28%, 250=94.72% 00:12:34.013 cpu : usr=4.30%, sys=12.20%, ctx=7704, majf=0, minf=1 00:12:34.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.013 issued rwts: total=3607,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.013 job1: (groupid=0, jobs=1): err= 0: pid=2601801: Thu Jul 25 07:18:06 2024 00:12:34.013 read: IOPS=3603, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec) 00:12:34.013 slat (nsec): min=8187, max=26096, avg=9016.17, stdev=796.27 00:12:34.013 clat (usec): min=77, max=171, avg=121.37, stdev= 6.72 00:12:34.013 lat (usec): min=86, max=180, avg=130.38, stdev= 6.69 00:12:34.013 clat percentiles (usec): 00:12:34.013 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 117], 00:12:34.013 | 30.00th=[ 119], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 123], 00:12:34.013 | 70.00th=[ 125], 80.00th=[ 127], 90.00th=[ 130], 95.00th=[ 133], 00:12:34.013 | 99.00th=[ 139], 99.50th=[ 141], 99.90th=[ 165], 99.95th=[ 165], 00:12:34.013 | 99.99th=[ 172] 00:12:34.013 write: IOPS=4091, BW=16.0MiB/s 
(16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:12:34.013 slat (nsec): min=9699, max=37563, avg=10518.94, stdev=1071.68 00:12:34.013 clat (usec): min=63, max=187, avg=114.77, stdev=11.67 00:12:34.013 lat (usec): min=74, max=199, avg=125.29, stdev=11.78 00:12:34.013 clat percentiles (usec): 00:12:34.013 | 1.00th=[ 80], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 109], 00:12:34.013 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 116], 00:12:34.013 | 70.00th=[ 118], 80.00th=[ 120], 90.00th=[ 126], 95.00th=[ 137], 00:12:34.013 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 186], 99.95th=[ 186], 00:12:34.013 | 99.99th=[ 188] 00:12:34.013 bw ( KiB/s): min=16384, max=16384, per=25.03%, avg=16384.00, stdev= 0.00, samples=1 00:12:34.013 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:34.013 lat (usec) : 100=2.05%, 250=97.95% 00:12:34.013 cpu : usr=6.10%, sys=9.80%, ctx=7703, majf=0, minf=1 00:12:34.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.013 issued rwts: total=3607,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.013 job2: (groupid=0, jobs=1): err= 0: pid=2601828: Thu Jul 25 07:18:06 2024 00:12:34.013 read: IOPS=3819, BW=14.9MiB/s (15.6MB/s)(14.9MiB/1001msec) 00:12:34.013 slat (nsec): min=8365, max=20894, avg=9104.34, stdev=821.58 00:12:34.013 clat (usec): min=74, max=160, avg=118.12, stdev= 8.63 00:12:34.013 lat (usec): min=83, max=169, avg=127.22, stdev= 8.63 00:12:34.013 clat percentiles (usec): 00:12:34.013 | 1.00th=[ 84], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 113], 00:12:34.013 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 119], 60.00th=[ 121], 00:12:34.013 | 70.00th=[ 123], 80.00th=[ 125], 90.00th=[ 128], 95.00th=[ 131], 00:12:34.013 | 99.00th=[ 135], 99.50th=[ 137], 99.90th=[ 147], 99.95th=[ 153], 00:12:34.013 | 99.99th=[ 161] 00:12:34.013 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:12:34.013 slat (nsec): min=10066, max=45735, avg=10791.57, stdev=1333.65 00:12:34.013 clat (usec): min=70, max=152, avg=110.53, stdev=12.41 00:12:34.013 lat (usec): min=81, max=163, avg=121.33, stdev=12.41 00:12:34.013 clat percentiles (usec): 00:12:34.013 | 1.00th=[ 76], 5.00th=[ 81], 10.00th=[ 86], 20.00th=[ 108], 00:12:34.013 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 116], 00:12:34.013 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 123], 95.00th=[ 125], 00:12:34.013 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 145], 99.95th=[ 149], 00:12:34.013 | 99.99th=[ 153] 00:12:34.013 bw ( KiB/s): min=16384, max=16384, per=25.03%, avg=16384.00, stdev= 0.00, samples=1 00:12:34.013 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:34.013 lat (usec) : 100=8.08%, 250=91.92% 00:12:34.013 cpu : usr=6.50%, sys=9.80%, ctx=7920, majf=0, minf=2 00:12:34.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.013 issued rwts: total=3823,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.013 job3: (groupid=0, jobs=1): err= 0: pid=2601838: Thu Jul 25 07:18:06 2024 00:12:34.013 read: 
IOPS=3601, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec) 00:12:34.013 slat (nsec): min=8309, max=32127, avg=9103.30, stdev=856.13 00:12:34.013 clat (usec): min=85, max=168, avg=121.29, stdev= 6.21 00:12:34.013 lat (usec): min=94, max=177, avg=130.39, stdev= 6.23 00:12:34.013 clat percentiles (usec): 00:12:34.013 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 115], 20.00th=[ 117], 00:12:34.013 | 30.00th=[ 119], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 123], 00:12:34.013 | 70.00th=[ 125], 80.00th=[ 127], 90.00th=[ 129], 95.00th=[ 131], 00:12:34.013 | 99.00th=[ 137], 99.50th=[ 139], 99.90th=[ 157], 99.95th=[ 161], 00:12:34.013 | 99.99th=[ 169] 00:12:34.013 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:12:34.013 slat (nsec): min=10020, max=43290, avg=10839.68, stdev=1025.63 00:12:34.013 clat (usec): min=70, max=185, avg=114.52, stdev=11.00 00:12:34.013 lat (usec): min=81, max=197, avg=125.36, stdev=11.12 00:12:34.013 clat percentiles (usec): 00:12:34.013 | 1.00th=[ 87], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 109], 00:12:34.013 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 115], 00:12:34.013 | 70.00th=[ 117], 80.00th=[ 119], 90.00th=[ 125], 95.00th=[ 137], 00:12:34.013 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 182], 00:12:34.013 | 99.99th=[ 186] 00:12:34.014 bw ( KiB/s): min=16384, max=16384, per=25.03%, avg=16384.00, stdev= 0.00, samples=1 00:12:34.014 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:34.014 lat (usec) : 100=1.40%, 250=98.60% 00:12:34.014 cpu : usr=4.70%, sys=11.40%, ctx=7701, majf=0, minf=1 00:12:34.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.014 issued rwts: total=3605,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.014 00:12:34.014 Run status group 0 (all jobs): 00:12:34.014 READ: bw=57.1MiB/s (59.9MB/s), 14.1MiB/s-14.9MiB/s (14.8MB/s-15.6MB/s), io=57.2MiB (60.0MB), run=1001-1001msec 00:12:34.014 WRITE: bw=63.9MiB/s (67.0MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=64.0MiB (67.1MB), run=1001-1001msec 00:12:34.014 00:12:34.014 Disk stats (read/write): 00:12:34.014 nvme0n1: ios=3121/3117, merge=0/0, ticks=339/328, in_queue=667, util=81.75% 00:12:34.014 nvme0n2: ios=3072/3119, merge=0/0, ticks=349/328, in_queue=677, util=82.96% 00:12:34.014 nvme0n3: ios=3072/3334, merge=0/0, ticks=342/342, in_queue=684, util=87.54% 00:12:34.014 nvme0n4: ios=3072/3118, merge=0/0, ticks=357/344, in_queue=701, util=89.18% 00:12:34.014 07:18:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:34.014 [global] 00:12:34.014 thread=1 00:12:34.014 invalidate=1 00:12:34.014 rw=write 00:12:34.014 time_based=1 00:12:34.014 runtime=1 00:12:34.014 ioengine=libaio 00:12:34.014 direct=1 00:12:34.014 bs=4096 00:12:34.014 iodepth=128 00:12:34.014 norandommap=0 00:12:34.014 numjobs=1 00:12:34.014 00:12:34.014 verify_dump=1 00:12:34.014 verify_backlog=512 00:12:34.014 verify_state_save=0 00:12:34.014 do_verify=1 00:12:34.014 verify=crc32c-intel 00:12:34.014 [job0] 00:12:34.014 filename=/dev/nvme0n1 00:12:34.014 [job1] 00:12:34.014 filename=/dev/nvme0n2 00:12:34.014 [job2] 00:12:34.014 filename=/dev/nvme0n3 00:12:34.014 
[job3] 00:12:34.014 filename=/dev/nvme0n4 00:12:34.014 Could not set queue depth (nvme0n1) 00:12:34.014 Could not set queue depth (nvme0n2) 00:12:34.014 Could not set queue depth (nvme0n3) 00:12:34.014 Could not set queue depth (nvme0n4) 00:12:34.281 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:34.281 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:34.281 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:34.281 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:34.282 fio-3.35 00:12:34.282 Starting 4 threads 00:12:35.675 00:12:35.675 job0: (groupid=0, jobs=1): err= 0: pid=2602241: Thu Jul 25 07:18:07 2024 00:12:35.675 read: IOPS=9926, BW=38.8MiB/s (40.7MB/s)(38.9MiB/1003msec) 00:12:35.675 slat (usec): min=2, max=1695, avg=48.86, stdev=175.61 00:12:35.675 clat (usec): min=1867, max=8519, avg=6486.98, stdev=346.74 00:12:35.675 lat (usec): min=2574, max=8521, avg=6535.84, stdev=333.22 00:12:35.675 clat percentiles (usec): 00:12:35.675 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6194], 20.00th=[ 6325], 00:12:35.675 | 30.00th=[ 6390], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6587], 00:12:35.675 | 70.00th=[ 6652], 80.00th=[ 6718], 90.00th=[ 6783], 95.00th=[ 6915], 00:12:35.675 | 99.00th=[ 7046], 99.50th=[ 7046], 99.90th=[ 8455], 99.95th=[ 8455], 00:12:35.675 | 99.99th=[ 8455] 00:12:35.675 write: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(40.0MiB/1003msec); 0 zone resets 00:12:35.675 slat (usec): min=2, max=1693, avg=46.27, stdev=165.35 00:12:35.675 clat (usec): min=4317, max=6944, avg=6107.70, stdev=264.43 00:12:35.675 lat (usec): min=4327, max=7412, avg=6153.97, stdev=251.81 00:12:35.675 clat percentiles (usec): 00:12:35.675 | 1.00th=[ 5276], 5.00th=[ 5538], 10.00th=[ 5866], 20.00th=[ 5997], 00:12:35.675 | 30.00th=[ 6063], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6194], 00:12:35.675 | 70.00th=[ 6259], 80.00th=[ 6259], 90.00th=[ 6390], 95.00th=[ 6456], 00:12:35.675 | 99.00th=[ 6587], 99.50th=[ 6652], 99.90th=[ 6718], 99.95th=[ 6783], 00:12:35.675 | 99.99th=[ 6915] 00:12:35.675 bw ( KiB/s): min=40960, max=40960, per=36.78%, avg=40960.00, stdev= 0.00, samples=2 00:12:35.675 iops : min=10240, max=10240, avg=10240.00, stdev= 0.00, samples=2 00:12:35.675 lat (msec) : 2=0.01%, 4=0.16%, 10=99.83% 00:12:35.675 cpu : usr=4.39%, sys=9.28%, ctx=1286, majf=0, minf=1 00:12:35.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:35.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.675 issued rwts: total=9956,10240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.675 job1: (groupid=0, jobs=1): err= 0: pid=2602255: Thu Jul 25 07:18:07 2024 00:12:35.675 read: IOPS=9170, BW=35.8MiB/s (37.6MB/s)(36.0MiB/1005msec) 00:12:35.675 slat (usec): min=2, max=2169, avg=52.91, stdev=191.24 00:12:35.675 clat (usec): min=5666, max=10970, avg=7068.74, stdev=499.07 00:12:35.675 lat (usec): min=5717, max=10973, avg=7121.65, stdev=499.17 00:12:35.675 clat percentiles (usec): 00:12:35.675 | 1.00th=[ 5866], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6652], 00:12:35.675 | 30.00th=[ 6783], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7177], 00:12:35.675 | 70.00th=[ 7373], 
80.00th=[ 7570], 90.00th=[ 7767], 95.00th=[ 7832], 00:12:35.675 | 99.00th=[ 8094], 99.50th=[ 8455], 99.90th=[ 8586], 99.95th=[ 8717], 00:12:35.675 | 99.99th=[10945] 00:12:35.675 write: IOPS=9295, BW=36.3MiB/s (38.1MB/s)(36.5MiB/1005msec); 0 zone resets 00:12:35.675 slat (usec): min=2, max=1887, avg=50.78, stdev=183.18 00:12:35.675 clat (usec): min=3020, max=10962, avg=6676.57, stdev=641.43 00:12:35.675 lat (usec): min=3037, max=10968, avg=6727.35, stdev=644.42 00:12:35.675 clat percentiles (usec): 00:12:35.675 | 1.00th=[ 5080], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6259], 00:12:35.675 | 30.00th=[ 6390], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6849], 00:12:35.675 | 70.00th=[ 6980], 80.00th=[ 7177], 90.00th=[ 7439], 95.00th=[ 7570], 00:12:35.675 | 99.00th=[ 8029], 99.50th=[ 8586], 99.90th=[10290], 99.95th=[10945], 00:12:35.675 | 99.99th=[10945] 00:12:35.675 bw ( KiB/s): min=36864, max=36864, per=33.10%, avg=36864.00, stdev= 0.00, samples=2 00:12:35.675 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:12:35.675 lat (msec) : 4=0.19%, 10=99.66%, 20=0.15% 00:12:35.675 cpu : usr=4.88%, sys=8.37%, ctx=1201, majf=0, minf=1 00:12:35.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:35.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.675 issued rwts: total=9216,9342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.675 job2: (groupid=0, jobs=1): err= 0: pid=2602264: Thu Jul 25 07:18:07 2024 00:12:35.675 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:12:35.675 slat (usec): min=2, max=1164, avg=121.97, stdev=309.18 00:12:35.675 clat (usec): min=14053, max=19104, avg=15773.19, stdev=529.18 00:12:35.675 lat (usec): min=14065, max=19107, avg=15895.15, stdev=522.94 00:12:35.675 clat percentiles (usec): 00:12:35.675 | 1.00th=[14353], 5.00th=[14877], 10.00th=[15139], 20.00th=[15270], 00:12:35.675 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15795], 60.00th=[15926], 00:12:35.675 | 70.00th=[16057], 80.00th=[16188], 90.00th=[16319], 95.00th=[16450], 00:12:35.675 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:12:35.675 | 99.99th=[19006] 00:12:35.675 write: IOPS=4182, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1004msec); 0 zone resets 00:12:35.675 slat (usec): min=2, max=1957, avg=114.46, stdev=291.81 00:12:35.675 clat (usec): min=3520, max=16890, avg=14839.05, stdev=1117.94 00:12:35.675 lat (usec): min=4483, max=16895, avg=14953.52, stdev=1115.73 00:12:35.675 clat percentiles (usec): 00:12:35.675 | 1.00th=[ 9110], 5.00th=[13829], 10.00th=[14222], 20.00th=[14353], 00:12:35.675 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:12:35.675 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15664], 95.00th=[15926], 00:12:35.675 | 99.00th=[16319], 99.50th=[16450], 99.90th=[16712], 99.95th=[16712], 00:12:35.675 | 99.99th=[16909] 00:12:35.675 bw ( KiB/s): min=16384, max=16384, per=14.71%, avg=16384.00, stdev= 0.00, samples=2 00:12:35.675 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:12:35.675 lat (msec) : 4=0.01%, 10=0.59%, 20=99.40% 00:12:35.675 cpu : usr=2.89%, sys=4.49%, ctx=1287, majf=0, minf=1 00:12:35.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:35.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.675 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.675 issued rwts: total=4096,4199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.675 job3: (groupid=0, jobs=1): err= 0: pid=2602265: Thu Jul 25 07:18:07 2024 00:12:35.675 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:12:35.675 slat (usec): min=2, max=1169, avg=121.77, stdev=309.09 00:12:35.675 clat (usec): min=14044, max=20058, avg=15808.96, stdev=587.92 00:12:35.675 lat (usec): min=14246, max=20062, avg=15930.73, stdev=578.96 00:12:35.675 clat percentiles (usec): 00:12:35.675 | 1.00th=[14484], 5.00th=[14746], 10.00th=[15008], 20.00th=[15401], 00:12:35.675 | 30.00th=[15533], 40.00th=[15795], 50.00th=[15926], 60.00th=[15926], 00:12:35.675 | 70.00th=[16057], 80.00th=[16188], 90.00th=[16319], 95.00th=[16581], 00:12:35.675 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19006], 99.95th=[19006], 00:12:35.675 | 99.99th=[20055] 00:12:35.675 write: IOPS=4182, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1005msec); 0 zone resets 00:12:35.675 slat (usec): min=2, max=1959, avg=114.65, stdev=292.84 00:12:35.675 clat (usec): min=3522, max=16893, avg=14783.49, stdev=1105.65 00:12:35.675 lat (usec): min=4483, max=16903, avg=14898.14, stdev=1103.86 00:12:35.675 clat percentiles (usec): 00:12:35.675 | 1.00th=[ 9110], 5.00th=[13698], 10.00th=[14091], 20.00th=[14353], 00:12:35.675 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:12:35.675 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15533], 95.00th=[15795], 00:12:35.675 | 99.00th=[16319], 99.50th=[16450], 99.90th=[16909], 99.95th=[16909], 00:12:35.675 | 99.99th=[16909] 00:12:35.675 bw ( KiB/s): min=16208, max=16560, per=14.71%, avg=16384.00, stdev=248.90, samples=2 00:12:35.675 iops : min= 4052, max= 4140, avg=4096.00, stdev=62.23, samples=2 00:12:35.675 lat (msec) : 4=0.01%, 10=0.55%, 20=99.42%, 50=0.01% 00:12:35.675 cpu : usr=1.49%, sys=5.58%, ctx=1283, majf=0, minf=1 00:12:35.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:35.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.675 issued rwts: total=4096,4203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.675 00:12:35.675 Run status group 0 (all jobs): 00:12:35.675 READ: bw=106MiB/s (112MB/s), 15.9MiB/s-38.8MiB/s (16.7MB/s-40.7MB/s), io=107MiB (112MB), run=1003-1005msec 00:12:35.675 WRITE: bw=109MiB/s (114MB/s), 16.3MiB/s-39.9MiB/s (17.1MB/s-41.8MB/s), io=109MiB (115MB), run=1003-1005msec 00:12:35.675 00:12:35.675 Disk stats (read/write): 00:12:35.675 nvme0n1: ios=8241/8605, merge=0/0, ticks=25742/25171, in_queue=50913, util=84.17% 00:12:35.675 nvme0n2: ios=7487/7680, merge=0/0, ticks=52527/50330, in_queue=102857, util=85.09% 00:12:35.675 nvme0n3: ios=3306/3584, merge=0/0, ticks=16929/17356, in_queue=34285, util=88.34% 00:12:35.675 nvme0n4: ios=3330/3584, merge=0/0, ticks=16991/17355, in_queue=34346, util=89.38% 00:12:35.675 07:18:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:35.675 [global] 00:12:35.675 thread=1 00:12:35.675 invalidate=1 00:12:35.675 rw=randwrite 00:12:35.675 time_based=1 00:12:35.676 runtime=1 00:12:35.676 ioengine=libaio 00:12:35.676 direct=1 00:12:35.676 bs=4096 00:12:35.676 
iodepth=128 00:12:35.676 norandommap=0 00:12:35.676 numjobs=1 00:12:35.676 00:12:35.676 verify_dump=1 00:12:35.676 verify_backlog=512 00:12:35.676 verify_state_save=0 00:12:35.676 do_verify=1 00:12:35.676 verify=crc32c-intel 00:12:35.676 [job0] 00:12:35.676 filename=/dev/nvme0n1 00:12:35.676 [job1] 00:12:35.676 filename=/dev/nvme0n2 00:12:35.676 [job2] 00:12:35.676 filename=/dev/nvme0n3 00:12:35.676 [job3] 00:12:35.676 filename=/dev/nvme0n4 00:12:35.676 Could not set queue depth (nvme0n1) 00:12:35.676 Could not set queue depth (nvme0n2) 00:12:35.676 Could not set queue depth (nvme0n3) 00:12:35.676 Could not set queue depth (nvme0n4) 00:12:35.934 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.934 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.934 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.934 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.934 fio-3.35 00:12:35.934 Starting 4 threads 00:12:37.313 00:12:37.313 job0: (groupid=0, jobs=1): err= 0: pid=2602861: Thu Jul 25 07:18:09 2024 00:12:37.313 read: IOPS=8640, BW=33.8MiB/s (35.4MB/s)(33.9MiB/1003msec) 00:12:37.313 slat (usec): min=2, max=1740, avg=57.80, stdev=212.98 00:12:37.313 clat (usec): min=1749, max=9784, avg=7532.50, stdev=576.51 00:12:37.313 lat (usec): min=2787, max=9786, avg=7590.30, stdev=542.95 00:12:37.313 clat percentiles (usec): 00:12:37.313 | 1.00th=[ 5932], 5.00th=[ 6718], 10.00th=[ 6849], 20.00th=[ 6980], 00:12:37.313 | 30.00th=[ 7373], 40.00th=[ 7701], 50.00th=[ 7767], 60.00th=[ 7832], 00:12:37.313 | 70.00th=[ 7832], 80.00th=[ 7963], 90.00th=[ 8029], 95.00th=[ 8029], 00:12:37.313 | 99.00th=[ 8160], 99.50th=[ 8225], 99.90th=[ 8848], 99.95th=[ 9765], 00:12:37.313 | 99.99th=[ 9765] 00:12:37.313 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:12:37.313 slat (usec): min=2, max=1637, avg=54.51, stdev=199.56 00:12:37.313 clat (usec): min=3910, max=7846, avg=7106.61, stdev=524.45 00:12:37.313 lat (usec): min=3957, max=7849, avg=7161.12, stdev=494.61 00:12:37.313 clat percentiles (usec): 00:12:37.313 | 1.00th=[ 5604], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6521], 00:12:37.313 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7373], 00:12:37.313 | 70.00th=[ 7439], 80.00th=[ 7504], 90.00th=[ 7570], 95.00th=[ 7635], 00:12:37.313 | 99.00th=[ 7767], 99.50th=[ 7832], 99.90th=[ 7832], 99.95th=[ 7832], 00:12:37.313 | 99.99th=[ 7832] 00:12:37.313 bw ( KiB/s): min=32768, max=36864, per=28.51%, avg=34816.00, stdev=2896.31, samples=2 00:12:37.313 iops : min= 8192, max= 9216, avg=8704.00, stdev=724.08, samples=2 00:12:37.313 lat (msec) : 2=0.01%, 4=0.19%, 10=99.80% 00:12:37.313 cpu : usr=3.79%, sys=4.99%, ctx=1104, majf=0, minf=1 00:12:37.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:37.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.313 issued rwts: total=8666,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:37.313 job1: (groupid=0, jobs=1): err= 0: pid=2602862: Thu Jul 25 07:18:09 2024 00:12:37.313 read: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec) 00:12:37.313 slat (usec): 
min=2, max=3169, avg=59.91, stdev=228.13 00:12:37.313 clat (usec): min=5422, max=17377, avg=7814.04, stdev=1282.05 00:12:37.313 lat (usec): min=6332, max=17389, avg=7873.95, stdev=1274.23 00:12:37.313 clat percentiles (usec): 00:12:37.313 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:12:37.313 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7767], 60.00th=[ 7767], 00:12:37.313 | 70.00th=[ 7832], 80.00th=[ 7898], 90.00th=[ 8029], 95.00th=[ 8291], 00:12:37.313 | 99.00th=[14615], 99.50th=[15008], 99.90th=[15401], 99.95th=[16712], 00:12:37.313 | 99.99th=[17433] 00:12:37.313 write: IOPS=8330, BW=32.5MiB/s (34.1MB/s)(32.6MiB/1002msec); 0 zone resets 00:12:37.313 slat (usec): min=2, max=2946, avg=57.96, stdev=225.74 00:12:37.313 clat (usec): min=1312, max=17308, avg=7537.94, stdev=1772.89 00:12:37.313 lat (usec): min=1764, max=17319, avg=7595.90, stdev=1777.54 00:12:37.313 clat percentiles (usec): 00:12:37.313 | 1.00th=[ 5473], 5.00th=[ 6325], 10.00th=[ 6390], 20.00th=[ 6652], 00:12:37.313 | 30.00th=[ 7177], 40.00th=[ 7242], 50.00th=[ 7308], 60.00th=[ 7373], 00:12:37.313 | 70.00th=[ 7439], 80.00th=[ 7570], 90.00th=[ 7701], 95.00th=[13566], 00:12:37.313 | 99.00th=[14615], 99.50th=[14746], 99.90th=[16712], 99.95th=[16909], 00:12:37.313 | 99.99th=[17433] 00:12:37.313 bw ( KiB/s): min=32768, max=32992, per=26.92%, avg=32880.00, stdev=158.39, samples=2 00:12:37.313 iops : min= 8192, max= 8248, avg=8220.00, stdev=39.60, samples=2 00:12:37.313 lat (msec) : 2=0.11%, 4=0.21%, 10=94.85%, 20=4.82% 00:12:37.313 cpu : usr=3.30%, sys=5.49%, ctx=1078, majf=0, minf=1 00:12:37.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:37.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.313 issued rwts: total=8192,8347,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:37.313 job2: (groupid=0, jobs=1): err= 0: pid=2602868: Thu Jul 25 07:18:09 2024 00:12:37.313 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:12:37.313 slat (usec): min=2, max=1389, avg=73.43, stdev=255.11 00:12:37.313 clat (usec): min=6528, max=15867, avg=9567.90, stdev=1788.66 00:12:37.313 lat (usec): min=6536, max=15870, avg=9641.33, stdev=1792.94 00:12:37.313 clat percentiles (usec): 00:12:37.313 | 1.00th=[ 7504], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 8717], 00:12:37.313 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:12:37.313 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[15533], 00:12:37.313 | 99.00th=[15795], 99.50th=[15795], 99.90th=[15926], 99.95th=[15926], 00:12:37.313 | 99.99th=[15926] 00:12:37.313 write: IOPS=6901, BW=27.0MiB/s (28.3MB/s)(27.0MiB/1002msec); 0 zone resets 00:12:37.313 slat (usec): min=2, max=1423, avg=70.72, stdev=241.25 00:12:37.313 clat (usec): min=827, max=15439, avg=9095.00, stdev=2002.60 00:12:37.313 lat (usec): min=1840, max=15473, avg=9165.71, stdev=2008.85 00:12:37.313 clat percentiles (usec): 00:12:37.313 | 1.00th=[ 6063], 5.00th=[ 7373], 10.00th=[ 7635], 20.00th=[ 8160], 00:12:37.313 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:12:37.313 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[11338], 95.00th=[14746], 00:12:37.313 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15401], 99.95th=[15401], 00:12:37.313 | 99.99th=[15401] 00:12:37.313 bw ( KiB/s): min=26096, max=28208, per=22.23%, avg=27152.00, 
stdev=1493.41, samples=2 00:12:37.314 iops : min= 6524, max= 7052, avg=6788.00, stdev=373.35, samples=2 00:12:37.314 lat (usec) : 1000=0.01% 00:12:37.314 lat (msec) : 2=0.13%, 4=0.13%, 10=88.96%, 20=10.77% 00:12:37.314 cpu : usr=2.60%, sys=5.29%, ctx=1056, majf=0, minf=1 00:12:37.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:37.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.314 issued rwts: total=6656,6915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.314 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:37.314 job3: (groupid=0, jobs=1): err= 0: pid=2602870: Thu Jul 25 07:18:09 2024 00:12:37.314 read: IOPS=6248, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1002msec) 00:12:37.314 slat (usec): min=2, max=2971, avg=78.68, stdev=280.72 00:12:37.314 clat (usec): min=641, max=17420, avg=10026.83, stdev=2265.29 00:12:37.314 lat (usec): min=1866, max=17431, avg=10105.51, stdev=2284.25 00:12:37.314 clat percentiles (usec): 00:12:37.314 | 1.00th=[ 5211], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 8717], 00:12:37.314 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:12:37.314 | 70.00th=[10028], 80.00th=[10552], 90.00th=[14484], 95.00th=[15533], 00:12:37.314 | 99.00th=[15795], 99.50th=[15795], 99.90th=[17171], 99.95th=[17171], 00:12:37.314 | 99.99th=[17433] 00:12:37.314 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:12:37.314 slat (usec): min=2, max=2172, avg=73.03, stdev=250.90 00:12:37.314 clat (usec): min=7308, max=16563, avg=9583.47, stdev=2128.62 00:12:37.314 lat (usec): min=7311, max=16572, avg=9656.50, stdev=2146.96 00:12:37.314 clat percentiles (usec): 00:12:37.314 | 1.00th=[ 7439], 5.00th=[ 7635], 10.00th=[ 7832], 20.00th=[ 8291], 00:12:37.314 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 9110], 00:12:37.314 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[14222], 95.00th=[14877], 00:12:37.314 | 99.00th=[15401], 99.50th=[15401], 99.90th=[15664], 99.95th=[15926], 00:12:37.314 | 99.99th=[16581] 00:12:37.314 bw ( KiB/s): min=24576, max=28592, per=21.77%, avg=26584.00, stdev=2839.74, samples=2 00:12:37.314 iops : min= 6144, max= 7148, avg=6646.00, stdev=709.94, samples=2 00:12:37.314 lat (usec) : 750=0.01% 00:12:37.314 lat (msec) : 2=0.13%, 4=0.12%, 10=74.43%, 20=25.32% 00:12:37.314 cpu : usr=3.20%, sys=4.50%, ctx=1061, majf=0, minf=1 00:12:37.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:37.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.314 issued rwts: total=6261,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.314 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:37.314 00:12:37.314 Run status group 0 (all jobs): 00:12:37.314 READ: bw=116MiB/s (122MB/s), 24.4MiB/s-33.8MiB/s (25.6MB/s-35.4MB/s), io=116MiB (122MB), run=1002-1003msec 00:12:37.314 WRITE: bw=119MiB/s (125MB/s), 25.9MiB/s-33.9MiB/s (27.2MB/s-35.5MB/s), io=120MiB (125MB), run=1002-1003msec 00:12:37.314 00:12:37.314 Disk stats (read/write): 00:12:37.314 nvme0n1: ios=7217/7313, merge=0/0, ticks=26343/25100, in_queue=51443, util=84.34% 00:12:37.314 nvme0n2: ios=6656/6971, merge=0/0, ticks=17036/17245, in_queue=34281, util=85.09% 00:12:37.314 nvme0n3: ios=5514/5632, merge=0/0, ticks=13164/12874, in_queue=26038, util=88.34% 00:12:37.314 
nvme0n4: ios=5120/5407, merge=0/0, ticks=13690/13249, in_queue=26939, util=89.47% 00:12:37.314 07:18:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:37.314 07:18:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2603195 00:12:37.314 07:18:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:37.314 07:18:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:37.314 [global] 00:12:37.314 thread=1 00:12:37.314 invalidate=1 00:12:37.314 rw=read 00:12:37.314 time_based=1 00:12:37.314 runtime=10 00:12:37.314 ioengine=libaio 00:12:37.314 direct=1 00:12:37.314 bs=4096 00:12:37.314 iodepth=1 00:12:37.314 norandommap=1 00:12:37.314 numjobs=1 00:12:37.314 00:12:37.314 [job0] 00:12:37.314 filename=/dev/nvme0n1 00:12:37.314 [job1] 00:12:37.314 filename=/dev/nvme0n2 00:12:37.314 [job2] 00:12:37.314 filename=/dev/nvme0n3 00:12:37.314 [job3] 00:12:37.314 filename=/dev/nvme0n4 00:12:37.314 Could not set queue depth (nvme0n1) 00:12:37.314 Could not set queue depth (nvme0n2) 00:12:37.314 Could not set queue depth (nvme0n3) 00:12:37.314 Could not set queue depth (nvme0n4) 00:12:37.574 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.574 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.574 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.574 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.574 fio-3.35 00:12:37.574 Starting 4 threads 00:12:40.118 07:18:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:40.376 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=77193216, buflen=4096 00:12:40.376 fio: pid=2603534, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:40.376 07:18:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:40.376 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=91136000, buflen=4096 00:12:40.376 fio: pid=2603530, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:40.376 07:18:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.376 07:18:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:40.635 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=28049408, buflen=4096 00:12:40.635 fio: pid=2603525, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:40.635 07:18:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.635 07:18:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:40.893 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=35106816, buflen=4096 00:12:40.893 
fio: pid=2603527, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:40.893 07:18:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.893 07:18:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:40.893 00:12:40.893 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2603525: Thu Jul 25 07:18:13 2024 00:12:40.893 read: IOPS=7736, BW=30.2MiB/s (31.7MB/s)(90.8MiB/3003msec) 00:12:40.893 slat (usec): min=5, max=16974, avg=10.74, stdev=155.29 00:12:40.893 clat (usec): min=48, max=627, avg=116.91, stdev=25.35 00:12:40.893 lat (usec): min=59, max=17053, avg=127.66, stdev=156.85 00:12:40.893 clat percentiles (usec): 00:12:40.893 | 1.00th=[ 58], 5.00th=[ 71], 10.00th=[ 76], 20.00th=[ 87], 00:12:40.893 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 124], 60.00th=[ 126], 00:12:40.893 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 139], 95.00th=[ 153], 00:12:40.893 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 204], 99.95th=[ 217], 00:12:40.893 | 99.99th=[ 433] 00:12:40.893 bw ( KiB/s): min=26440, max=29888, per=26.20%, avg=29057.60, stdev=1479.16, samples=5 00:12:40.893 iops : min= 6610, max= 7472, avg=7264.40, stdev=369.79, samples=5 00:12:40.893 lat (usec) : 50=0.01%, 100=21.21%, 250=78.74%, 500=0.03%, 750=0.01% 00:12:40.893 cpu : usr=3.53%, sys=10.89%, ctx=23239, majf=0, minf=1 00:12:40.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.893 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.893 issued rwts: total=23233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.893 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2603527: Thu Jul 25 07:18:13 2024 00:12:40.893 read: IOPS=7750, BW=30.3MiB/s (31.7MB/s)(97.5MiB/3220msec) 00:12:40.893 slat (usec): min=2, max=16962, avg=11.20, stdev=190.22 00:12:40.893 clat (usec): min=42, max=20579, avg=115.67, stdev=156.28 00:12:40.893 lat (usec): min=56, max=20588, avg=126.87, stdev=245.76 00:12:40.893 clat percentiles (usec): 00:12:40.893 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 90], 00:12:40.893 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 124], 00:12:40.894 | 70.00th=[ 126], 80.00th=[ 129], 90.00th=[ 139], 95.00th=[ 153], 00:12:40.894 | 99.00th=[ 174], 99.50th=[ 188], 99.90th=[ 208], 99.95th=[ 215], 00:12:40.894 | 99.99th=[ 8717] 00:12:40.894 bw ( KiB/s): min=27048, max=32915, per=27.24%, avg=30212.50, stdev=2012.63, samples=6 00:12:40.894 iops : min= 6762, max= 8228, avg=7553.00, stdev=502.96, samples=6 00:12:40.894 lat (usec) : 50=0.07%, 100=21.25%, 250=78.65%, 500=0.01%, 750=0.01% 00:12:40.894 lat (msec) : 10=0.01%, 20=0.01%, 50=0.01% 00:12:40.894 cpu : usr=2.67%, sys=9.79%, ctx=24966, majf=0, minf=1 00:12:40.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.894 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.894 issued rwts: total=24956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.894 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:12:40.894 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2603530: Thu Jul 25 07:18:13 2024 00:12:40.894 read: IOPS=7884, BW=30.8MiB/s (32.3MB/s)(86.9MiB/2822msec) 00:12:40.894 slat (usec): min=2, max=11840, avg= 8.60, stdev=99.31 00:12:40.894 clat (usec): min=52, max=26067, avg=116.16, stdev=175.21 00:12:40.894 lat (usec): min=69, max=26076, avg=124.76, stdev=201.33 00:12:40.894 clat percentiles (usec): 00:12:40.894 | 1.00th=[ 74], 5.00th=[ 80], 10.00th=[ 84], 20.00th=[ 93], 00:12:40.894 | 30.00th=[ 115], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 123], 00:12:40.894 | 70.00th=[ 125], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 135], 00:12:40.894 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 194], 99.95th=[ 206], 00:12:40.894 | 99.99th=[ 848] 00:12:40.894 bw ( KiB/s): min=29856, max=39752, per=29.11%, avg=32283.00, stdev=4252.10, samples=5 00:12:40.894 iops : min= 7464, max= 9938, avg=8070.60, stdev=1063.05, samples=5 00:12:40.894 lat (usec) : 100=21.73%, 250=78.23%, 500=0.01%, 1000=0.01% 00:12:40.894 lat (msec) : 2=0.01%, 50=0.01% 00:12:40.894 cpu : usr=2.98%, sys=8.51%, ctx=22256, majf=0, minf=1 00:12:40.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.894 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.894 issued rwts: total=22251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.894 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2603534: Thu Jul 25 07:18:13 2024 00:12:40.894 read: IOPS=7152, BW=27.9MiB/s (29.3MB/s)(73.6MiB/2635msec) 00:12:40.894 slat (nsec): min=5253, max=51758, avg=9114.63, stdev=1175.98 00:12:40.894 clat (usec): min=70, max=605, avg=128.02, stdev=14.09 00:12:40.894 lat (usec): min=79, max=614, avg=137.13, stdev=14.18 00:12:40.894 clat percentiles (usec): 00:12:40.894 | 1.00th=[ 97], 5.00th=[ 116], 10.00th=[ 118], 20.00th=[ 121], 00:12:40.894 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 127], 00:12:40.894 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 147], 95.00th=[ 157], 00:12:40.894 | 99.00th=[ 174], 99.50th=[ 186], 99.90th=[ 206], 99.95th=[ 217], 00:12:40.894 | 99.99th=[ 416] 00:12:40.894 bw ( KiB/s): min=26456, max=29880, per=26.18%, avg=29035.20, stdev=1463.77, samples=5 00:12:40.894 iops : min= 6614, max= 7470, avg=7258.80, stdev=365.94, samples=5 00:12:40.894 lat (usec) : 100=1.17%, 250=98.78%, 500=0.04%, 750=0.01% 00:12:40.894 cpu : usr=3.38%, sys=10.29%, ctx=18847, majf=0, minf=2 00:12:40.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.894 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.894 issued rwts: total=18847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.894 00:12:40.894 Run status group 0 (all jobs): 00:12:40.894 READ: bw=108MiB/s (114MB/s), 27.9MiB/s-30.8MiB/s (29.3MB/s-32.3MB/s), io=349MiB (366MB), run=2635-3220msec 00:12:40.894 00:12:40.894 Disk stats (read/write): 00:12:40.894 nvme0n1: ios=21493/0, merge=0/0, ticks=2419/0, in_queue=2419, util=93.85% 00:12:40.894 nvme0n2: ios=23325/0, merge=0/0, ticks=2594/0, in_queue=2594, util=93.00% 00:12:40.894 nvme0n3: ios=20731/0, merge=0/0, 
ticks=2265/0, in_queue=2265, util=96.03%
00:12:40.894 nvme0n4: ios=18703/0, merge=0/0, ticks=2217/0, in_queue=2217, util=96.42%
00:12:41.152 07:18:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:12:41.152 07:18:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:12:41.410 07:18:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:12:41.410 07:18:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:12:41.410 07:18:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:12:41.410 07:18:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:12:41.668 07:18:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:12:41.668 07:18:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:12:41.927 07:18:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:12:41.927 07:18:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2603195
00:12:41.927 07:18:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:12:41.927 07:18:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:42.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:42.861 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:42.861 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0
00:12:42.861 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:42.861 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:42.861 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:42.861 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:42.861 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0
00:12:42.861 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:12:42.861 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:12:42.861 nvmf hotplug test: fio failed as expected
00:12:42.861 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:12:43.120 rmmod nvme_rdma
00:12:43.120 rmmod nvme_fabrics
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2599743 ']'
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2599743
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2599743 ']'
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2599743
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2599743
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2599743'
00:12:43.120 killing process with pid 2599743
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2599743
00:12:43.120 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2599743
00:12:43.379 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:43.379 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:12:43.379
00:12:43.379 real 0m28.543s
00:12:43.379 user 2m7.115s
00:12:43.379 sys 0m11.689s
00:12:43.379 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:43.379 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:12:43.379 ************************************
00:12:43.379 END TEST nvmf_fio_target
00:12:43.379 ************************************
00:12:43.379 07:18:15 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma
00:12:43.379 07:18:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:12:43.379 07:18:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:43.379 07:18:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:12:43.379 ************************************
00:12:43.379 START TEST nvmf_bdevio
00:12:43.379 ************************************
00:12:43.379 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma
00:12:43.638 * Looking for test storage...
00:12:43.638 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.638 07:18:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:43.638 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:43.638 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:43.638 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.638 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.638 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.638 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:43.638 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:43.638 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:43.638 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:12:43.639 07:18:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:12:51.754 
07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:51.754 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:51.754 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:51.754 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:51.754 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- 
# rdma_device_init 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:51.754 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:51.755 07:18:23 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:51.755 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:51.755 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:51.755 altname enp217s0f0np0 00:12:51.755 altname ens818f0np0 00:12:51.755 inet 192.168.100.8/24 scope global mlx_0_0 00:12:51.755 valid_lft forever preferred_lft forever 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:51.755 07:18:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:51.755 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:51.755 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:51.755 altname enp217s0f1np1 00:12:51.755 altname ens818f1np1 00:12:51.755 inet 192.168.100.9/24 scope global mlx_0_1 00:12:51.755 valid_lft forever preferred_lft forever 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:51.755 192.168.100.9' 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:51.755 192.168.100.9' 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:51.755 192.168.100.9' 00:12:51.755 07:18:24 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2608530 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2608530 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2608530 ']' 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.755 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.755 [2024-07-25 07:18:24.175260] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:12:51.755 [2024-07-25 07:18:24.175315] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.755 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.755 [2024-07-25 07:18:24.259176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.013 [2024-07-25 07:18:24.333741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.013 [2024-07-25 07:18:24.333778] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:52.013 [2024-07-25 07:18:24.333788] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.013 [2024-07-25 07:18:24.333797] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.013 [2024-07-25 07:18:24.333804] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.013 [2024-07-25 07:18:24.333919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:12:52.013 [2024-07-25 07:18:24.334031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:12:52.013 [2024-07-25 07:18:24.334138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.013 [2024-07-25 07:18:24.334140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:12:52.576 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.576 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:52.577 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.577 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:52.577 07:18:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:52.577 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.577 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:52.577 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.577 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:52.577 [2024-07-25 07:18:25.072143] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xec66d0/0xecabc0) succeed. 00:12:52.577 [2024-07-25 07:18:25.081582] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xec7d10/0xf0c250) succeed. 
00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:52.834 Malloc0 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:52.834 [2024-07-25 07:18:25.247050] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:52.834 { 00:12:52.834 "params": { 00:12:52.834 "name": "Nvme$subsystem", 00:12:52.834 "trtype": "$TEST_TRANSPORT", 00:12:52.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:52.834 "adrfam": "ipv4", 00:12:52.834 "trsvcid": "$NVMF_PORT", 00:12:52.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:52.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:52.834 "hdgst": ${hdgst:-false}, 00:12:52.834 "ddgst": ${ddgst:-false} 00:12:52.834 }, 00:12:52.834 "method": "bdev_nvme_attach_controller" 00:12:52.834 } 00:12:52.834 EOF 00:12:52.834 )") 00:12:52.834 07:18:25 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:52.834 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:52.834 "params": { 00:12:52.834 "name": "Nvme1", 00:12:52.834 "trtype": "rdma", 00:12:52.834 "traddr": "192.168.100.8", 00:12:52.834 "adrfam": "ipv4", 00:12:52.834 "trsvcid": "4420", 00:12:52.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:52.834 "hdgst": false, 00:12:52.834 "ddgst": false 00:12:52.834 }, 00:12:52.834 "method": "bdev_nvme_attach_controller" 00:12:52.834 }' 00:12:52.834 [2024-07-25 07:18:25.295619] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:12:52.834 [2024-07-25 07:18:25.295674] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608718 ] 00:12:52.834 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.091 [2024-07-25 07:18:25.381218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.091 [2024-07-25 07:18:25.454411] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.091 [2024-07-25 07:18:25.454505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.091 [2024-07-25 07:18:25.454507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.349 I/O targets: 00:12:53.349 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:53.349 00:12:53.349 00:12:53.349 CUnit - A unit testing framework for C - Version 2.1-3 00:12:53.349 http://cunit.sourceforge.net/ 00:12:53.349 00:12:53.349 00:12:53.349 Suite: bdevio tests on: Nvme1n1 00:12:53.349 Test: blockdev write read block ...passed 00:12:53.349 Test: blockdev write zeroes read block ...passed 00:12:53.349 Test: blockdev write zeroes read no split ...passed 00:12:53.349 Test: blockdev write zeroes read split ...passed 00:12:53.349 Test: blockdev write zeroes read split partial ...passed 00:12:53.349 Test: blockdev reset ...[2024-07-25 07:18:25.662613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:53.349 [2024-07-25 07:18:25.685263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:12:53.349 [2024-07-25 07:18:25.711896] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:53.349 passed 00:12:53.349 Test: blockdev write read 8 blocks ...passed 00:12:53.349 Test: blockdev write read size > 128k ...passed 00:12:53.349 Test: blockdev write read invalid size ...passed 00:12:53.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:53.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:53.349 Test: blockdev write read max offset ...passed 00:12:53.349 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:53.349 Test: blockdev writev readv 8 blocks ...passed 00:12:53.349 Test: blockdev writev readv 30 x 1block ...passed 00:12:53.349 Test: blockdev writev readv block ...passed 00:12:53.349 Test: blockdev writev readv size > 128k ...passed 00:12:53.349 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:53.349 Test: blockdev comparev and writev ...[2024-07-25 07:18:25.714834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:53.349 [2024-07-25 07:18:25.714864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:53.349 [2024-07-25 07:18:25.714877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:53.349 [2024-07-25 07:18:25.714887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:53.349 [2024-07-25 07:18:25.715060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:53.349 [2024-07-25 07:18:25.715072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:53.349 [2024-07-25 07:18:25.715087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:53.349 [2024-07-25 07:18:25.715097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:53.349 [2024-07-25 07:18:25.715268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:53.349 [2024-07-25 07:18:25.715279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:53.349 [2024-07-25 07:18:25.715291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:53.349 [2024-07-25 07:18:25.715300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:53.349 [2024-07-25 07:18:25.715466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:53.349 [2024-07-25 07:18:25.715477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:53.349 [2024-07-25 07:18:25.715488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:53.349 [2024-07-25 07:18:25.715498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:53.349 passed 00:12:53.349 Test: blockdev nvme passthru rw ...passed 00:12:53.349 Test: blockdev nvme passthru vendor specific ...[2024-07-25 07:18:25.715769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:53.349 [2024-07-25 07:18:25.715782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:53.349 [2024-07-25 07:18:25.715827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:53.349 [2024-07-25 07:18:25.715839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:53.349 [2024-07-25 07:18:25.715880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:53.349 [2024-07-25 07:18:25.715891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:53.349 [2024-07-25 07:18:25.715940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:53.349 [2024-07-25 07:18:25.715951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:53.349 passed 00:12:53.349 Test: blockdev nvme admin passthru ...passed 00:12:53.349 Test: blockdev copy ...passed 00:12:53.349 00:12:53.349 Run Summary: Type Total Ran Passed Failed Inactive 00:12:53.349 suites 1 1 n/a 0 0 00:12:53.349 tests 23 23 23 0 0 00:12:53.349 asserts 152 152 152 0 n/a 00:12:53.349 00:12:53.349 Elapsed time = 0.170 seconds 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:53.607 rmmod nvme_rdma 00:12:53.607 rmmod nvme_fabrics 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:53.607 07:18:25 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2608530 ']' 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2608530 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2608530 ']' 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2608530 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.607 07:18:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2608530 00:12:53.607 07:18:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:53.607 07:18:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:53.607 07:18:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2608530' 00:12:53.607 killing process with pid 2608530 00:12:53.607 07:18:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2608530 00:12:53.607 07:18:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2608530 00:12:53.866 07:18:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.866 07:18:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:53.866 00:12:53.866 real 0m10.439s 00:12:53.866 user 0m11.119s 00:12:53.866 sys 0m6.878s 00:12:53.866 07:18:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.866 07:18:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:53.866 ************************************ 00:12:53.866 END TEST nvmf_bdevio 00:12:53.866 ************************************ 00:12:53.866 07:18:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:53.866 00:12:53.866 real 4m34.149s 00:12:53.866 user 11m5.042s 00:12:53.866 sys 1m51.817s 00:12:53.866 07:18:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.866 07:18:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:53.866 ************************************ 00:12:53.866 END TEST nvmf_target_core 00:12:53.866 ************************************ 00:12:53.866 07:18:26 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:53.866 07:18:26 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:53.866 07:18:26 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.866 07:18:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:54.125 ************************************ 00:12:54.125 START TEST nvmf_target_extra 00:12:54.125 ************************************ 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:54.125 * Looking for test storage... 00:12:54.125 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:54.125 ************************************ 00:12:54.125 START TEST nvmf_example 00:12:54.125 ************************************ 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:54.125 * Looking for test storage... 00:12:54.125 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.125 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:54.126 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.126 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.126 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.126 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.385 07:18:26 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.385 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.386 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.386 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.386 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.386 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.386 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.386 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.386 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:02.602 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:02.602 07:18:34 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.602 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:02.602 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:02.603 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:d9:00.1: mlx_0_1' 00:13:02.603 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # uname 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
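
The loop opened just above (it continues below) is get_rdma_if_list cross-checking each detected ConnectX netdev against the interfaces that rxe_cfg reports as RDMA-capable. A minimal stand-alone sketch of the same idea, walking sysfs directly instead of going through rxe_cfg; the helper name list_rdma_netdevs is illustrative, not SPDK's:

    # Sketch (assumes the rdma-core sysfs layout): print every net device
    # that backs an RDMA device, i.e. the set the trace is computing here.
    list_rdma_netdevs() {
        local d
        for d in /sys/class/infiniband/*/device/net/*; do
            [[ -e $d ]] && basename "$d"
        done
    }
    list_rdma_netdevs    # on this rig: mlx_0_0 and mlx_0_1, one per line
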
00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:02.603 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:02.603 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:02.603 altname enp217s0f0np0 00:13:02.603 altname ens818f0np0 00:13:02.603 inet 192.168.100.8/24 scope global mlx_0_0 00:13:02.603 valid_lft forever preferred_lft forever 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:02.603 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:02.603 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:02.603 altname enp217s0f1np1 00:13:02.603 altname ens818f1np1 00:13:02.603 inet 192.168.100.9/24 scope global mlx_0_1 00:13:02.603 valid_lft forever preferred_lft forever 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
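
Both ports come back with the expected addresses, 192.168.100.8 on mlx_0_0 and 192.168.100.9 on mlx_0_1. The get_ip_address helper traced above is exactly this pipeline, reproduced as a self-contained function with the same commands the trace shows:

    # Field 4 of `ip -o -4 addr show` is ADDR/PREFIX; cut drops the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # prints 192.168.100.8 on this rig
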
00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:02.603 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:02.604 192.168.100.9' 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:02.604 192.168.100.9' 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:02.604 192.168.100.9' 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2613043 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2613043 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2613043 ']' 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:13:02.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.604 07:18:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:02.604 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.536 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.536 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:13:03.536 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:03.536 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.536 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.536 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:03.536 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.536 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.536 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.536 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:03.536 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.536 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.536 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.536 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:03.536 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:03.536 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.536 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.536 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.536 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:03.536 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.536 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.536 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.536 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.537 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:03.537 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 
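
The rpc_cmd calls traced through this stretch (create the RDMA transport, a 64 MB malloc bdev with 512-byte blocks, the subsystem, its namespace, and the listener whose status check follows below) are the standard NVMe-oF target bring-up. For reference, the equivalent sequence with SPDK's scripts/rpc.py client, using the values from this run:

    # Equivalent bring-up via scripts/rpc.py (default socket /var/tmp/spdk.sock):
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512                    # returns Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
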
00:13:03.537 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:03.537 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:03.537 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:13:03.537 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:13:03.794 EAL: No free 2048 kB hugepages reported on node 1
00:13:15.994 Initializing NVMe Controllers
00:13:15.994 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:13:15.994 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:15.994 Initialization complete. Launching workers.
00:13:15.994 ========================================================
00:13:15.994                                                                 Latency(us)
00:13:15.994 Device Information                                            :       IOPS      MiB/s    Average        min        max
00:13:15.994 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   26469.85     103.40    2419.39     618.47   15994.85
00:13:15.994 ========================================================
00:13:15.994 Total                                                         :   26469.85     103.40    2419.39     618.47   15994.85
00:13:15.994
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:13:15.994 rmmod nvme_rdma
00:13:15.994 rmmod nvme_fabrics
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2613043 ']'
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2613043
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2613043 ']'
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2613043
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2613043
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']'
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2613043'
00:13:15.994 killing process with pid 2613043
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2613043
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2613043
00:13:15.994 nvmf threads initialize successfully
00:13:15.994 bdev subsystem init successfully
00:13:15.994 created an nvmf target service
00:13:15.994 create target's poll groups done
00:13:15.994 all subsystems of target started
00:13:15.994 nvmf target is running
00:13:15.994 all subsystems of target stopped
00:13:15.994 destroy target's poll groups done
00:13:15.994 destroyed the nvmf target service
00:13:15.994 bdev subsystem finish successfully
00:13:15.994 nvmf threads destroy successfully
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:15.994
00:13:15.994 real    0m21.107s
00:13:15.994 user    0m52.391s
00:13:15.994 sys     0m6.782s
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:15.994 ************************************
00:13:15.994 END TEST nvmf_example
00:13:15.994 ************************************
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:15.994 ************************************
00:13:15.994 START TEST nvmf_filesystem
00:13:15.994 ************************************
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma
00:13:15.994 * Looking for test storage...
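
run_test, which has just wrapped filesystem.sh the same way it wrapped nvmf_example.sh above, is the harness's banner-and-timer wrapper; the '[' 3 -le 1 ']' check in the trace is its argument-count guard. A minimal sketch of the pattern (SPDK's real implementation in autotest_common.sh also records per-test timing data):

    # Sketch of the run_test pattern seen in this log: guard, banner, timed run, banner.
    run_test() {
        [ "$#" -le 1 ] && return 1    # the '[' 3 -le 1 ']' guard in the trace
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
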
00:13:15.994 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 
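
The CONFIG_* assignments being sourced here (the dump resumes immediately below) are the frozen output of SPDK's ./configure step; build_config.sh is plain shell, so a build's feature set can be checked directly. For example, to confirm the features this run relies on, assuming the flags sit at the start of each line as the trace suggests:

    # Inspect selected feature flags of this SPDK build:
    grep -E '^CONFIG_(RDMA|UBSAN|DEBUG|SHARED)=' \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh
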
00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:15.994 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 
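
These shell-level flags have a C-level mirror: the same configure run emits include/spdk/config.h with matching SPDK_CONFIG_* defines, which applications.sh dumps a little further down. A quick way to see the pairing for one feature, with paths from this workspace:

    # Shell flag vs. its C define for the RDMA feature:
    grep '^CONFIG_RDMA=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh
    grep 'SPDK_CONFIG_RDMA ' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h
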
00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # 
CONFIG_MAX_LCORES=128 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:13:15.995 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:15.995 #define SPDK_CONFIG_H 00:13:15.995 #define SPDK_CONFIG_APPS 1 00:13:15.995 #define SPDK_CONFIG_ARCH native 00:13:15.995 #undef SPDK_CONFIG_ASAN 00:13:15.995 #undef SPDK_CONFIG_AVAHI 00:13:15.995 #undef SPDK_CONFIG_CET 00:13:15.995 #define SPDK_CONFIG_COVERAGE 1 00:13:15.995 #define SPDK_CONFIG_CROSS_PREFIX 00:13:15.995 #undef 
SPDK_CONFIG_CRYPTO 00:13:15.995 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:15.995 #undef SPDK_CONFIG_CUSTOMOCF 00:13:15.995 #undef SPDK_CONFIG_DAOS 00:13:15.995 #define SPDK_CONFIG_DAOS_DIR 00:13:15.995 #define SPDK_CONFIG_DEBUG 1 00:13:15.995 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:15.995 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:13:15.995 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:15.995 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:15.995 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:15.995 #undef SPDK_CONFIG_DPDK_UADK 00:13:15.995 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:13:15.995 #define SPDK_CONFIG_EXAMPLES 1 00:13:15.995 #undef SPDK_CONFIG_FC 00:13:15.995 #define SPDK_CONFIG_FC_PATH 00:13:15.995 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:15.995 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:15.995 #undef SPDK_CONFIG_FUSE 00:13:15.995 #undef SPDK_CONFIG_FUZZER 00:13:15.995 #define SPDK_CONFIG_FUZZER_LIB 00:13:15.995 #undef SPDK_CONFIG_GOLANG 00:13:15.995 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:15.995 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:15.995 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:15.995 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:15.995 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:15.995 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:15.995 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:15.995 #define SPDK_CONFIG_IDXD 1 00:13:15.995 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:15.995 #undef SPDK_CONFIG_IPSEC_MB 00:13:15.995 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:15.995 #define SPDK_CONFIG_ISAL 1 00:13:15.995 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:15.995 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:15.995 #define SPDK_CONFIG_LIBDIR 00:13:15.995 #undef SPDK_CONFIG_LTO 00:13:15.995 #define SPDK_CONFIG_MAX_LCORES 128 00:13:15.995 #define SPDK_CONFIG_NVME_CUSE 1 00:13:15.995 #undef SPDK_CONFIG_OCF 00:13:15.995 #define SPDK_CONFIG_OCF_PATH 00:13:15.995 #define SPDK_CONFIG_OPENSSL_PATH 00:13:15.995 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:15.995 #define SPDK_CONFIG_PGO_DIR 00:13:15.995 #undef SPDK_CONFIG_PGO_USE 00:13:15.995 #define SPDK_CONFIG_PREFIX /usr/local 00:13:15.995 #undef SPDK_CONFIG_RAID5F 00:13:15.995 #undef SPDK_CONFIG_RBD 00:13:15.995 #define SPDK_CONFIG_RDMA 1 00:13:15.995 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:15.995 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:15.995 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:15.995 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:15.995 #define SPDK_CONFIG_SHARED 1 00:13:15.995 #undef SPDK_CONFIG_SMA 00:13:15.995 #define SPDK_CONFIG_TESTS 1 00:13:15.995 #undef SPDK_CONFIG_TSAN 00:13:15.995 #define SPDK_CONFIG_UBLK 1 00:13:15.995 #define SPDK_CONFIG_UBSAN 1 00:13:15.995 #undef SPDK_CONFIG_UNIT_TESTS 00:13:15.995 #undef SPDK_CONFIG_URING 00:13:15.995 #define SPDK_CONFIG_URING_PATH 00:13:15.995 #undef SPDK_CONFIG_URING_ZNS 00:13:15.995 #undef SPDK_CONFIG_USDT 00:13:15.995 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:15.995 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:15.995 #undef SPDK_CONFIG_VFIO_USER 00:13:15.995 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:15.995 #define SPDK_CONFIG_VHOST 1 00:13:15.995 #define SPDK_CONFIG_VIRTIO 1 00:13:15.995 #undef SPDK_CONFIG_VTUNE 00:13:15.995 #define SPDK_CONFIG_VTUNE_DIR 00:13:15.995 #define SPDK_CONFIG_WERROR 1 00:13:15.995 #define SPDK_CONFIG_WPDK_DIR 00:13:15.995 #undef SPDK_CONFIG_XNVME 00:13:15.996 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:15.996 07:18:47 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.996 07:18:47 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:15.996 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:15.997 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:13:15.998 07:18:47 
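The sanitizer wiring traced above boils down to a suppression file rebuilt on every run plus fixed runtime options. The values below are copied from the trace; the append redirection on the echo is an assumption about what the trace elides:

  # Rebuild the LeakSanitizer suppression file and export sanitizer options.
  supp=/var/tmp/asan_suppression_file
  rm -rf "$supp"
  echo "leak:libfuse3.so" >> "$supp"   # known libfuse3 leak, silenced for all tests
  export LSAN_OPTIONS="suppressions=$supp"
  export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
  export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"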
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=rdma 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 2615252 ]] 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 2615252 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.DcXRUK 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.DcXRUK/tests/target /tmp/spdk.DcXRUK 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:13:15.998 07:18:47 
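set_test_storage first builds a candidate list from the test directory and a dry-run temp name; mktemp -u only generates the path (here /tmp/spdk.DcXRUK), and mkdir -p then materializes the candidates before df is consulted. A sketch of that setup, with testdir standing in for the traced target directory:

  # Candidate storage dirs: the test dir itself, a per-test subdir of a temp
  # fallback, and the fallback root. mktemp flags: -u dry run (name only),
  # -d directory template, -t relative to TMPDIR.
  testdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
  storage_fallback=$(mktemp -udt spdk.XXXXXX)
  storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
  mkdir -p "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"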
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=951066624 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4333363200 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=50931187712 00:13:15.998 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742276608 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10811088896 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30805581824 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=65556480 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12325023744 00:13:15.999 07:18:47 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348456960 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23433216 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30866366464 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4771840 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6174220288 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174224384 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:13:15.999 * Looking for test storage... 
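The storage search announced above walks those candidates against the df snapshot just parsed. Condensed into one runnable piece under the same array names, with two caveats: the real helper has extra rules for memory-backed filesystems (here they are simply skipped), and the sizes follow the byte-scale values seen in the log rather than df's default 1K blocks:

  # Parse `df -T` into associative arrays keyed by mount point.
  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      mounts[$mount]=$source
      fss[$mount]=$fs
      sizes[$mount]=$size
      avails[$mount]=$avail
      uses[$mount]=$use
  done < <(df -T | grep -v Filesystem)

  # Accept the first candidate whose filesystem holds the requested space
  # without crossing 95% usage. 2214592512 = 2 GiB plus a 64 MiB margin.
  requested_size=2214592512
  for target_dir in "${storage_candidates[@]}"; do
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      (( avails[$mount] >= requested_size )) || continue
      [[ ${fss[$mount]} == tmpfs || ${fss[$mount]} == ramfs ]] && continue
      new_size=$(( uses[$mount] + requested_size ))
      (( new_size * 100 / sizes[$mount] > 95 )) && continue
      export SPDK_TEST_STORAGE=$target_dir && break
  done

On this rig the root overlay wins: 50931187712 bytes available against the 2214592512-byte request, landing at new_size 13025681408, roughly 21% of the 61742276608-byte filesystem, so the test directory itself becomes SPDK_TEST_STORAGE.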
00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:15.999 07:18:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=50931187712 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=13025681408 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:15.999 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.999 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 
0 -eq 1 ']' 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:16.000 07:18:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@296 -- # local -ga e810 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:24.118 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 
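NIC discovery above is a table lookup: the harness caches PCI devices by vendor:device id and, with SPDK_TEST_NVMF_NICS=mlx5, keeps only the Mellanox list. A self-contained sketch seeded with this rig's two 0x15b3:0x1015 (ConnectX-4 Lx) ports; pci_bus_cache is normally populated by the framework's bus scan:

  # vendor:device -> space-separated PCI addresses (seeded from the log;
  # the harness fills this by scanning the PCI bus).
  declare -A pci_bus_cache
  pci_bus_cache["0x15b3:0x1015"]="0000:d9:00.0 0000:d9:00.1"

  net_devs=()
  mlx=(${pci_bus_cache["0x15b3:0x1015"]})   # intentional word split into addresses
  for pci in "${mlx[@]}"; do
      echo "Found $pci (0x15b3 - 0x1015)"
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev names live in sysfs
      net_devs+=("${pci_net_devs[@]##*/}")               # e.g. mlx_0_0, mlx_0_1
  done

Because the device id is 0x1015 rather than 0x1017 or 0x1019, those branches fall through and the rdma path just fixes the connect command to 'nvme connect -i 15', as the trace shows for both ports.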
00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:24.118 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:24.118 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:24.118 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@414 -- # is_hw=yes 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.118 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:24.119 07:18:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:24.119 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:24.119 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:24.119 altname enp217s0f0np0 00:13:24.119 altname ens818f0np0 00:13:24.119 inet 192.168.100.8/24 scope global mlx_0_0 00:13:24.119 valid_lft forever preferred_lft forever 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:24.119 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:24.119 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:24.119 altname enp217s0f1np1 00:13:24.119 altname ens818f1np1 00:13:24.119 inet 192.168.100.9/24 scope global mlx_0_1 00:13:24.119 valid_lft forever preferred_lft forever 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:24.119 07:18:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:24.119 192.168.100.9' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:24.119 192.168.100.9' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:24.119 192.168.100.9' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:24.119 ************************************ 00:13:24.119 START TEST nvmf_filesystem_no_in_capsule 00:13:24.119 ************************************ 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2619263 00:13:24.119 07:18:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2619263 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2619263 ']' 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:24.119 07:18:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.120 [2024-07-25 07:18:56.408992] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:13:24.120 [2024-07-25 07:18:56.409055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.120 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.120 [2024-07-25 07:18:56.495603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.120 [2024-07-25 07:18:56.567917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.120 [2024-07-25 07:18:56.567960] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.120 [2024-07-25 07:18:56.567970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.120 [2024-07-25 07:18:56.567978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.120 [2024-07-25 07:18:56.567986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
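At this point the target binary is up and the harness blocks in waitforlisten until the RPC socket answers. A hand-rolled equivalent of nvmfappstart plus waitforlisten, under the assumption of paths relative to the SPDK checkout; rpc_get_methods is a stock SPDK RPC, but the polling loop itself is illustrative:

    # Launch nvmf_tgt with the flags from the trace and poll the UNIX
    # domain socket until the RPC server responds.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
          >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmfpid) listening on /var/tmp/spdk.sock"

-m 0xF pins four reactors (cores 0-3), which matches the four "Reactor started" notices that follow.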
00:13:24.120 [2024-07-25 07:18:56.568048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.120 [2024-07-25 07:18:56.568145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.120 [2024-07-25 07:18:56.568162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.120 [2024-07-25 07:18:56.568163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.058 [2024-07-25 07:18:57.276007] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:13:25.058 [2024-07-25 07:18:57.298481] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xef5dd0/0xefa2c0) succeed. 00:13:25.058 [2024-07-25 07:18:57.307994] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xef7410/0xf3b950) succeed. 
00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.058 Malloc1 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.058 [2024-07-25 07:18:57.544688] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:25.058 07:18:57 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.058 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:25.058 { 00:13:25.058 "name": "Malloc1", 00:13:25.058 "aliases": [ 00:13:25.058 "ad5560d6-f889-41ae-afa4-81e9596cdb1f" 00:13:25.058 ], 00:13:25.058 "product_name": "Malloc disk", 00:13:25.058 "block_size": 512, 00:13:25.058 "num_blocks": 1048576, 00:13:25.058 "uuid": "ad5560d6-f889-41ae-afa4-81e9596cdb1f", 00:13:25.058 "assigned_rate_limits": { 00:13:25.058 "rw_ios_per_sec": 0, 00:13:25.058 "rw_mbytes_per_sec": 0, 00:13:25.058 "r_mbytes_per_sec": 0, 00:13:25.058 "w_mbytes_per_sec": 0 00:13:25.058 }, 00:13:25.058 "claimed": true, 00:13:25.058 "claim_type": "exclusive_write", 00:13:25.058 "zoned": false, 00:13:25.058 "supported_io_types": { 00:13:25.058 "read": true, 00:13:25.058 "write": true, 00:13:25.058 "unmap": true, 00:13:25.058 "flush": true, 00:13:25.058 "reset": true, 00:13:25.058 "nvme_admin": false, 00:13:25.058 "nvme_io": false, 00:13:25.058 "nvme_io_md": false, 00:13:25.058 "write_zeroes": true, 00:13:25.058 "zcopy": true, 00:13:25.059 "get_zone_info": false, 00:13:25.059 "zone_management": false, 00:13:25.059 "zone_append": false, 00:13:25.059 "compare": false, 00:13:25.059 "compare_and_write": false, 00:13:25.059 "abort": true, 00:13:25.059 "seek_hole": false, 00:13:25.059 "seek_data": false, 00:13:25.059 "copy": true, 00:13:25.059 "nvme_iov_md": false 00:13:25.059 }, 00:13:25.059 "memory_domains": [ 00:13:25.059 { 00:13:25.059 "dma_device_id": "system", 00:13:25.059 "dma_device_type": 1 00:13:25.059 }, 00:13:25.059 { 00:13:25.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.059 "dma_device_type": 2 00:13:25.059 } 00:13:25.059 ], 00:13:25.059 "driver_specific": {} 00:13:25.059 } 00:13:25.059 ]' 00:13:25.059 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:25.318 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:25.319 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:25.319 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:25.319 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:25.319 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:25.319 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:13:25.319 07:18:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:26.256 07:18:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:26.256 07:18:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:26.256 07:18:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.256 07:18:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:26.256 07:18:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:28.162 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:28.421 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:13:28.421 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:28.421 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:28.680 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:29.617 ************************************ 00:13:29.617 START TEST filesystem_ext4 00:13:29.617 ************************************ 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:29.617 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:29.617 mke2fs 1.46.5 (30-Dec-2021) 00:13:29.617 Discarding device blocks: 0/522240 done 00:13:29.617 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:29.617 Filesystem UUID: 04993f8d-800a-48eb-9fc8-12e58fd281c1 00:13:29.617 Superblock backups stored on 
blocks: 00:13:29.617 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:29.617 00:13:29.617 Allocating group tables: 0/64 done 00:13:29.617 Writing inode tables: 0/64 done 00:13:29.617 Creating journal (8192 blocks): done 00:13:29.618 Writing superblocks and filesystem accounting information: 0/64 done 00:13:29.618 00:13:29.618 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:29.618 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2619263 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:29.877 00:13:29.877 real 0m0.190s 00:13:29.877 user 0m0.027s 00:13:29.877 sys 0m0.081s 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:29.877 ************************************ 00:13:29.877 END TEST filesystem_ext4 00:13:29.877 ************************************ 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.877 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:13:29.877 ************************************ 00:13:29.877 START TEST filesystem_btrfs 00:13:29.877 ************************************ 00:13:29.878 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:29.878 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:29.878 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:29.878 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:29.878 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:29.878 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:29.878 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:29.878 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:29.878 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:29.878 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:29.878 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:30.177 btrfs-progs v6.6.2 00:13:30.177 See https://btrfs.readthedocs.io for more information. 00:13:30.177 00:13:30.177 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:30.177 NOTE: several default settings have changed in version 5.15, please make sure 00:13:30.177 this does not affect your deployments: 00:13:30.177 - DUP for metadata (-m dup) 00:13:30.177 - enabled no-holes (-O no-holes) 00:13:30.177 - enabled free-space-tree (-R free-space-tree) 00:13:30.177 00:13:30.177 Label: (null) 00:13:30.177 UUID: e3a70f3b-977f-462d-b654-107568bf1d6c 00:13:30.177 Node size: 16384 00:13:30.177 Sector size: 4096 00:13:30.177 Filesystem size: 510.00MiB 00:13:30.177 Block group profiles: 00:13:30.177 Data: single 8.00MiB 00:13:30.177 Metadata: DUP 32.00MiB 00:13:30.177 System: DUP 8.00MiB 00:13:30.177 SSD detected: yes 00:13:30.177 Zoned device: no 00:13:30.177 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:30.177 Runtime features: free-space-tree 00:13:30.177 Checksum: crc32c 00:13:30.177 Number of devices: 1 00:13:30.177 Devices: 00:13:30.177 ID SIZE PATH 00:13:30.177 1 510.00MiB /dev/nvme0n1p1 00:13:30.177 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2619263 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:30.177 00:13:30.177 real 0m0.258s 00:13:30.177 user 0m0.032s 00:13:30.177 sys 0m0.136s 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:30.177 ************************************ 
00:13:30.177 END TEST filesystem_btrfs 00:13:30.177 ************************************ 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.177 ************************************ 00:13:30.177 START TEST filesystem_xfs 00:13:30.177 ************************************ 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:30.177 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:30.436 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:30.436 = sectsz=512 attr=2, projid32bit=1 00:13:30.436 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:30.436 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:30.436 data = bsize=4096 blocks=130560, imaxpct=25 00:13:30.436 = sunit=0 swidth=0 blks 00:13:30.436 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:30.436 log =internal log bsize=4096 blocks=16384, version=2 00:13:30.436 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:30.436 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:30.436 Discarding blocks...Done. 
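Each filesystem subtest runs the same create-and-exercise cycle; the steps below are condensed from the ext4 pass above and repeat here for xfs (nvmfpid is the target PID, 2619263 in this run):

    # nvmf_filesystem_create, in outline: build the fs on the exported
    # namespace, do a small write/delete through the page cache, then
    # verify the target and the block devices survived.
    mkfs.xfs -f /dev/nvme0n1p1        # ext4 uses -F, btrfs uses -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                           # target still alive?
    lsblk -l -o NAME | grep -q -w nvme0n1        # controller still here?
    lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition still here?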
00:13:30.436 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:30.436 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:30.436 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:30.436 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:30.436 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:30.436 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:30.437 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:30.437 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:30.437 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2619263 00:13:30.437 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:30.437 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:30.437 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:30.437 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:30.437 00:13:30.437 real 0m0.214s 00:13:30.437 user 0m0.019s 00:13:30.437 sys 0m0.091s 00:13:30.437 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.437 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:30.437 ************************************ 00:13:30.437 END TEST filesystem_xfs 00:13:30.437 ************************************ 00:13:30.437 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:30.437 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:30.437 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.375 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.375 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:31.375 07:19:03 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:31.375 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.375 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:31.375 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2619263 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2619263 ']' 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2619263 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2619263 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2619263' 00:13:31.635 killing process with pid 2619263 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2619263 00:13:31.635 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2619263 00:13:31.894 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:31.894 00:13:31.894 real 0m8.021s 00:13:31.894 user 0m31.251s 00:13:31.894 sys 0m1.275s 00:13:31.894 07:19:04 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.894 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.895 ************************************ 00:13:31.895 END TEST nvmf_filesystem_no_in_capsule 00:13:31.895 ************************************ 00:13:31.895 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:31.895 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:31.895 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.895 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:32.154 ************************************ 00:13:32.154 START TEST nvmf_filesystem_in_capsule 00:13:32.154 ************************************ 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2620930 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2620930 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2620930 ']' 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
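The second half of the suite repeats everything with in-capsule data enabled: nvmf_filesystem_part 4096 relaunches the target (new pid 2620930), and the only functional delta in the provisioning is the transport flag, as the entries below show:

    # -c 4096 lets write payloads up to 4 KiB travel inside the command
    # capsule itself, instead of the target fetching them with an RDMA READ.
    rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096

Everything else — Malloc1, cnode1, the 192.168.100.8:4420 listener — is set up identically in the trace that follows.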
00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.154 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.154 [2024-07-25 07:19:04.513867] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:13:32.154 [2024-07-25 07:19:04.513914] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.154 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.154 [2024-07-25 07:19:04.595135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.154 [2024-07-25 07:19:04.668101] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.154 [2024-07-25 07:19:04.668138] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.154 [2024-07-25 07:19:04.668147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.154 [2024-07-25 07:19:04.668155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.154 [2024-07-25 07:19:04.668162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.154 [2024-07-25 07:19:04.668255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.154 [2024-07-25 07:19:04.668351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.154 [2024-07-25 07:19:04.668434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.154 [2024-07-25 07:19:04.668436] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.090 [2024-07-25 07:19:05.398804] 
rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x83fdd0/0x8442c0) succeed. 00:13:33.090 [2024-07-25 07:19:05.407982] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x841410/0x885950) succeed. 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.090 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.349 Malloc1 00:13:33.349 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.349 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:33.349 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.349 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.349 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.349 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.350 [2024-07-25 07:19:05.670401] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:33.350 07:19:05 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:33.350 { 00:13:33.350 "name": "Malloc1", 00:13:33.350 "aliases": [ 00:13:33.350 "3c270d8e-2d40-4040-ab8a-01c81ba54ff3" 00:13:33.350 ], 00:13:33.350 "product_name": "Malloc disk", 00:13:33.350 "block_size": 512, 00:13:33.350 "num_blocks": 1048576, 00:13:33.350 "uuid": "3c270d8e-2d40-4040-ab8a-01c81ba54ff3", 00:13:33.350 "assigned_rate_limits": { 00:13:33.350 "rw_ios_per_sec": 0, 00:13:33.350 "rw_mbytes_per_sec": 0, 00:13:33.350 "r_mbytes_per_sec": 0, 00:13:33.350 "w_mbytes_per_sec": 0 00:13:33.350 }, 00:13:33.350 "claimed": true, 00:13:33.350 "claim_type": "exclusive_write", 00:13:33.350 "zoned": false, 00:13:33.350 "supported_io_types": { 00:13:33.350 "read": true, 00:13:33.350 "write": true, 00:13:33.350 "unmap": true, 00:13:33.350 "flush": true, 00:13:33.350 "reset": true, 00:13:33.350 "nvme_admin": false, 00:13:33.350 "nvme_io": false, 00:13:33.350 "nvme_io_md": false, 00:13:33.350 "write_zeroes": true, 00:13:33.350 "zcopy": true, 00:13:33.350 "get_zone_info": false, 00:13:33.350 "zone_management": false, 00:13:33.350 "zone_append": false, 00:13:33.350 "compare": false, 00:13:33.350 "compare_and_write": false, 00:13:33.350 "abort": true, 00:13:33.350 "seek_hole": false, 00:13:33.350 "seek_data": false, 00:13:33.350 "copy": true, 00:13:33.350 "nvme_iov_md": false 00:13:33.350 }, 00:13:33.350 "memory_domains": [ 00:13:33.350 { 00:13:33.350 "dma_device_id": "system", 00:13:33.350 "dma_device_type": 1 00:13:33.350 }, 00:13:33.350 { 00:13:33.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.350 "dma_device_type": 2 00:13:33.350 } 00:13:33.350 ], 00:13:33.350 "driver_specific": {} 00:13:33.350 } 00:13:33.350 ]' 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:33.350 07:19:05 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:33.350 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:34.288 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.288 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:34.288 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.288 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:34.288 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:36.823 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:36.824 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:36.824 07:19:08 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:36.824 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:36.824 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:36.824 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:37.762 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:37.762 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:37.762 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:37.762 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.762 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.762 ************************************ 00:13:37.762 START TEST filesystem_in_capsule_ext4 00:13:37.762 ************************************ 00:13:37.762 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:37.763 mke2fs 1.46.5 (30-Dec-2021) 00:13:37.763 Discarding device blocks: 0/522240 done 
00:13:37.763 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:37.763 Filesystem UUID: 996fc993-48d2-4aa7-95d5-8d395271a6a1 00:13:37.763 Superblock backups stored on blocks: 00:13:37.763 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:37.763 00:13:37.763 Allocating group tables: 0/64 done 00:13:37.763 Writing inode tables: 0/64 done 00:13:37.763 Creating journal (8192 blocks): done 00:13:37.763 Writing superblocks and filesystem accounting information: 0/64 done 00:13:37.763 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:37.763 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2620930 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:38.022 00:13:38.022 real 0m0.184s 00:13:38.022 user 0m0.030s 00:13:38.022 sys 0m0.071s 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:38.022 ************************************ 00:13:38.022 END TEST filesystem_in_capsule_ext4 00:13:38.022 ************************************ 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:38.022 07:19:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.022 ************************************ 00:13:38.022 START TEST filesystem_in_capsule_btrfs 00:13:38.022 ************************************ 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:38.022 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:38.022 btrfs-progs v6.6.2 00:13:38.022 See https://btrfs.readthedocs.io for more information. 00:13:38.022 00:13:38.022 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:38.022 NOTE: several default settings have changed in version 5.15, please make sure 00:13:38.022 this does not affect your deployments: 00:13:38.022 - DUP for metadata (-m dup) 00:13:38.022 - enabled no-holes (-O no-holes) 00:13:38.023 - enabled free-space-tree (-R free-space-tree) 00:13:38.023 00:13:38.023 Label: (null) 00:13:38.023 UUID: 0124effe-0c83-48ad-a157-56756cbe3b82 00:13:38.023 Node size: 16384 00:13:38.023 Sector size: 4096 00:13:38.023 Filesystem size: 510.00MiB 00:13:38.023 Block group profiles: 00:13:38.023 Data: single 8.00MiB 00:13:38.023 Metadata: DUP 32.00MiB 00:13:38.023 System: DUP 8.00MiB 00:13:38.023 SSD detected: yes 00:13:38.023 Zoned device: no 00:13:38.023 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:38.023 Runtime features: free-space-tree 00:13:38.023 Checksum: crc32c 00:13:38.023 Number of devices: 1 00:13:38.023 Devices: 00:13:38.023 ID SIZE PATH 00:13:38.023 1 510.00MiB /dev/nvme0n1p1 00:13:38.023 00:13:38.023 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:38.023 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2620930 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:38.283 00:13:38.283 real 0m0.261s 00:13:38.283 user 0m0.032s 00:13:38.283 sys 0m0.138s 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.283 07:19:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:38.283 ************************************ 00:13:38.283 END TEST filesystem_in_capsule_btrfs 00:13:38.283 ************************************ 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.283 ************************************ 00:13:38.283 START TEST filesystem_in_capsule_xfs 00:13:38.283 ************************************ 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:38.283 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:38.542 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:38.542 = sectsz=512 attr=2, projid32bit=1 00:13:38.542 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:38.542 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:38.542 data = bsize=4096 blocks=130560, imaxpct=25 00:13:38.542 = sunit=0 swidth=0 blks 00:13:38.542 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:38.542 log =internal log bsize=4096 blocks=16384, version=2 00:13:38.542 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:38.542 realtime =none extsz=4096 
blocks=0, rtextents=0 00:13:38.542 Discarding blocks...Done. 00:13:38.542 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:38.542 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:38.542 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:38.542 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:38.543 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:38.543 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:38.543 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:38.543 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:38.543 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2620930 00:13:38.543 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:38.543 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:38.543 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:38.543 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:38.543 00:13:38.543 real 0m0.205s 00:13:38.543 user 0m0.035s 00:13:38.543 sys 0m0.073s 00:13:38.543 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.543 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:38.543 ************************************ 00:13:38.543 END TEST filesystem_in_capsule_xfs 00:13:38.543 ************************************ 00:13:38.543 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:38.543 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:38.543 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.923 07:19:12 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2620930 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2620930 ']' 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2620930 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2620930 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2620930' 00:13:39.923 killing process with pid 2620930 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2620930 00:13:39.923 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2620930 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:40.183 00:13:40.183 real 0m8.079s 
00:13:40.183 user 0m31.444s 00:13:40.183 sys 0m1.267s 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.183 ************************************ 00:13:40.183 END TEST nvmf_filesystem_in_capsule 00:13:40.183 ************************************ 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:40.183 rmmod nvme_rdma 00:13:40.183 rmmod nvme_fabrics 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:40.183 00:13:40.183 real 0m24.891s 00:13:40.183 user 1m5.255s 00:13:40.183 sys 0m9.032s 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:40.183 ************************************ 00:13:40.183 END TEST nvmf_filesystem 00:13:40.183 ************************************ 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.183 ************************************ 00:13:40.183 START TEST nvmf_target_discovery 00:13:40.183 ************************************ 00:13:40.183 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:40.443 * Looking for test storage... 
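Before the discovery suite's output resumes, a compact recap of the target/host bring-up the filesystem suite above exercised. This is a sketch reconstructed from the rpc_cmd and nvme traces, not captured output; the 4096-byte in-capsule size, bdev geometry, NQN, host NQN/ID, and the 192.168.100.8:4420 listener are all taken verbatim from the log.

    rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py"
    # RDMA transport with 4096-byte in-capsule data (-c) and 8 KiB I/O unit (-u)
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    # 512 MiB RAM-backed bdev with 512-byte blocks (1048576 blocks, matching the
    # bdev_get_bdevs dump earlier in the trace)
    $rpc bdev_malloc_create 512 512 -b Malloc1
    # Subsystem allowing any host (-a) with serial SPDKISFASTANDAWESOME (-s)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Host side: connect exactly as the trace does (the -i 15 flag is reproduced
    # from the NVME_CONNECT setting the suite uses for RDMA NICs)
    nvme connect -i 15 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e \
        -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

Each per-filesystem subtest then partitions the resulting nvme0n1 (parted mklabel gpt mkpart), formats it with mkfs.ext4/mkfs.btrfs/mkfs.xfs, and round-trips a file through /mnt/device before unmounting, as the traces above show.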
00:13:40.443 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.443 07:19:12 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:40.443 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:40.444 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:40.444 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.444 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.444 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.444 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.444 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.444 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.444 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.444 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:40.444 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:40.444 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.444 07:19:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@296 -- # e810=() 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:48.570 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == 
unbound ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:48.570 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:48.570 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.570 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:48.570 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:48.571 07:19:20 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:48.571 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:48.571 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:48.571 altname enp217s0f0np0 00:13:48.571 altname ens818f0np0 00:13:48.571 inet 192.168.100.8/24 scope global mlx_0_0 00:13:48.571 valid_lft forever preferred_lft forever 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip 
addr show mlx_0_1 00:13:48.571 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:48.571 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:48.571 altname enp217s0f1np1 00:13:48.571 altname ens818f1np1 00:13:48.571 inet 192.168.100.9/24 scope global mlx_0_1 00:13:48.571 valid_lft forever preferred_lft forever 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:48.571 07:19:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:48.571 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:48.572 192.168.100.9' 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:48.572 192.168.100.9' 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:48.572 192.168.100.9' 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2626396 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2626396 00:13:48.572 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2626396 ']' 00:13:48.833 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.833 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.833 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.833 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.833 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.833 [2024-07-25 07:19:21.143504] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:13:48.833 [2024-07-25 07:19:21.143551] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.833 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.833 [2024-07-25 07:19:21.226086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.833 [2024-07-25 07:19:21.295822] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.833 [2024-07-25 07:19:21.295863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.833 [2024-07-25 07:19:21.295872] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.833 [2024-07-25 07:19:21.295880] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.833 [2024-07-25 07:19:21.295887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
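
nvmfappstart above launches the target and waitforlisten blocks until the RPC socket answers; a rough sketch of the same sequence (binary path and flags copied from the trace; the polling loop is an illustrative approximation of the autotest helper, not its verbatim body):

  # Start the NVMe-oF target: shm id 0, full tracepoint mask, 4 cores.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the default RPC socket until the app services requests.
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done
  kill -0 "$nvmfpid"   # target must still be alive once the loop exits
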
00:13:48.833 [2024-07-25 07:19:21.295981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.833 [2024-07-25 07:19:21.296077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.833 [2024-07-25 07:19:21.296139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.833 [2024-07-25 07:19:21.296140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.774 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.774 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:49.774 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.774 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:49.774 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.774 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:49.774 07:19:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 [2024-07-25 07:19:22.027891] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11f4dd0/0x11f92c0) succeed. 00:13:49.774 [2024-07-25 07:19:22.037197] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11f6410/0x123a950) succeed. 
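
With both mlx5 IB devices created, the test configures the fabric; rpc_cmd ultimately drives scripts/rpc.py, so the transport call echoed above is roughly equivalent to:

  # RDMA transport with 1024 shared buffers and an 8192-byte I/O unit
  # size (flags exactly as echoed in the trace above).
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
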
00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 Null1 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 [2024-07-25 07:19:22.199749] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 Null2 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:49.774 07:19:22 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 Null3 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 
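
The discovery.sh loop traced here provisions four identical subsystems; expanded into direct rpc.py calls, one iteration looks like this (NQNs, serials, and sizes copied verbatim from the trace):

  for i in $(seq 1 4); do
      # Null bdev to back the namespace; arguments as echoed above.
      scripts/rpc.py bdev_null_create Null$i 102400 512
      # Subsystem open to any host (-a) with a fixed serial number.
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
          -a -s SPDK0000000000000$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      # Listen on the first RDMA IP, port 4420.
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t rdma -a 192.168.100.8 -s 4420
  done
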
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 Null4 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.774 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.075 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.075 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:50.075 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.075 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.075 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.075 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:13:50.075 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.075 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.075 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.075 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:13:50.075 00:13:50.075 Discovery Log Number of Records 6, Generation counter 6 00:13:50.075 =====Discovery Log Entry 0====== 00:13:50.075 trtype: rdma 00:13:50.075 adrfam: ipv4 00:13:50.075 subtype: current discovery subsystem 00:13:50.075 treq: not required 00:13:50.075 portid: 0 00:13:50.075 trsvcid: 4420 00:13:50.075 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:50.075 traddr: 192.168.100.8 00:13:50.075 eflags: explicit discovery connections, duplicate discovery information 00:13:50.075 rdma_prtype: not specified 00:13:50.075 rdma_qptype: connected 00:13:50.075 rdma_cms: rdma-cm 00:13:50.075 rdma_pkey: 0x0000 00:13:50.075 =====Discovery Log Entry 1====== 00:13:50.075 trtype: rdma 00:13:50.075 adrfam: ipv4 00:13:50.075 subtype: nvme subsystem 00:13:50.075 treq: not required 00:13:50.075 portid: 0 00:13:50.075 trsvcid: 4420 00:13:50.075 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:50.075 traddr: 192.168.100.8 00:13:50.075 eflags: none 00:13:50.075 rdma_prtype: not specified 00:13:50.075 rdma_qptype: connected 00:13:50.075 rdma_cms: rdma-cm 00:13:50.075 rdma_pkey: 0x0000 00:13:50.075 =====Discovery Log Entry 2====== 00:13:50.075 trtype: rdma 00:13:50.075 adrfam: ipv4 00:13:50.075 subtype: nvme subsystem 00:13:50.075 treq: not required 00:13:50.075 portid: 0 00:13:50.075 trsvcid: 4420 00:13:50.075 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:50.075 traddr: 192.168.100.8 00:13:50.075 eflags: none 00:13:50.075 rdma_prtype: not specified 00:13:50.075 rdma_qptype: connected 00:13:50.075 rdma_cms: rdma-cm 00:13:50.075 rdma_pkey: 0x0000 00:13:50.075 =====Discovery Log Entry 3====== 00:13:50.075 trtype: rdma 00:13:50.075 adrfam: ipv4 00:13:50.075 subtype: nvme subsystem 00:13:50.075 treq: not required 00:13:50.075 portid: 0 00:13:50.075 trsvcid: 4420 00:13:50.075 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:50.075 traddr: 192.168.100.8 00:13:50.075 eflags: none 00:13:50.075 rdma_prtype: not specified 00:13:50.075 rdma_qptype: connected 00:13:50.075 rdma_cms: rdma-cm 00:13:50.075 rdma_pkey: 0x0000 00:13:50.075 =====Discovery Log Entry 4====== 00:13:50.075 trtype: rdma 00:13:50.075 adrfam: ipv4 00:13:50.075 subtype: nvme subsystem 00:13:50.075 treq: not required 00:13:50.075 portid: 0 00:13:50.075 trsvcid: 4420 00:13:50.075 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:50.075 traddr: 192.168.100.8 00:13:50.076 eflags: none 00:13:50.076 rdma_prtype: not specified 00:13:50.076 rdma_qptype: connected 00:13:50.076 rdma_cms: rdma-cm 00:13:50.076 rdma_pkey: 0x0000 00:13:50.076 =====Discovery Log Entry 5====== 00:13:50.076 trtype: rdma 00:13:50.076 adrfam: ipv4 00:13:50.076 subtype: discovery subsystem referral 00:13:50.076 treq: not required 00:13:50.076 portid: 0 00:13:50.076 trsvcid: 4430 00:13:50.076 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:50.076 traddr: 192.168.100.8 00:13:50.076 eflags: none 00:13:50.076 rdma_prtype: unrecognized 00:13:50.076 rdma_qptype: unrecognized 00:13:50.076 rdma_cms: unrecognized 00:13:50.076 rdma_pkey: 0x0000 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:50.076 Perform nvmf subsystem discovery via RPC 00:13:50.076 07:19:22 
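
The six-record log above comes straight from nvme-cli; the discovery query can be rerun by hand with the same host identity generated earlier in the run (command reassembled from the trace; the grep is an added convenience):

  # Query the discovery service on the first RDMA target IP.
  nvme discover -t rdma -a 192.168.100.8 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e
  # Count only the NVMe subsystem entries (expected: 4).
  nvme discover -t rdma -a 192.168.100.8 -s 4420 | grep -c 'nvme subsystem'
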
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.076 [ 00:13:50.076 { 00:13:50.076 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:50.076 "subtype": "Discovery", 00:13:50.076 "listen_addresses": [ 00:13:50.076 { 00:13:50.076 "trtype": "RDMA", 00:13:50.076 "adrfam": "IPv4", 00:13:50.076 "traddr": "192.168.100.8", 00:13:50.076 "trsvcid": "4420" 00:13:50.076 } 00:13:50.076 ], 00:13:50.076 "allow_any_host": true, 00:13:50.076 "hosts": [] 00:13:50.076 }, 00:13:50.076 { 00:13:50.076 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.076 "subtype": "NVMe", 00:13:50.076 "listen_addresses": [ 00:13:50.076 { 00:13:50.076 "trtype": "RDMA", 00:13:50.076 "adrfam": "IPv4", 00:13:50.076 "traddr": "192.168.100.8", 00:13:50.076 "trsvcid": "4420" 00:13:50.076 } 00:13:50.076 ], 00:13:50.076 "allow_any_host": true, 00:13:50.076 "hosts": [], 00:13:50.076 "serial_number": "SPDK00000000000001", 00:13:50.076 "model_number": "SPDK bdev Controller", 00:13:50.076 "max_namespaces": 32, 00:13:50.076 "min_cntlid": 1, 00:13:50.076 "max_cntlid": 65519, 00:13:50.076 "namespaces": [ 00:13:50.076 { 00:13:50.076 "nsid": 1, 00:13:50.076 "bdev_name": "Null1", 00:13:50.076 "name": "Null1", 00:13:50.076 "nguid": "D23176AE6CA042509114B339DEE64862", 00:13:50.076 "uuid": "d23176ae-6ca0-4250-9114-b339dee64862" 00:13:50.076 } 00:13:50.076 ] 00:13:50.076 }, 00:13:50.076 { 00:13:50.076 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:50.076 "subtype": "NVMe", 00:13:50.076 "listen_addresses": [ 00:13:50.076 { 00:13:50.076 "trtype": "RDMA", 00:13:50.076 "adrfam": "IPv4", 00:13:50.076 "traddr": "192.168.100.8", 00:13:50.076 "trsvcid": "4420" 00:13:50.076 } 00:13:50.076 ], 00:13:50.076 "allow_any_host": true, 00:13:50.076 "hosts": [], 00:13:50.076 "serial_number": "SPDK00000000000002", 00:13:50.076 "model_number": "SPDK bdev Controller", 00:13:50.076 "max_namespaces": 32, 00:13:50.076 "min_cntlid": 1, 00:13:50.076 "max_cntlid": 65519, 00:13:50.076 "namespaces": [ 00:13:50.076 { 00:13:50.076 "nsid": 1, 00:13:50.076 "bdev_name": "Null2", 00:13:50.076 "name": "Null2", 00:13:50.076 "nguid": "C76488FC543048ACBDCF3558003B0B19", 00:13:50.076 "uuid": "c76488fc-5430-48ac-bdcf-3558003b0b19" 00:13:50.076 } 00:13:50.076 ] 00:13:50.076 }, 00:13:50.076 { 00:13:50.076 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:50.076 "subtype": "NVMe", 00:13:50.076 "listen_addresses": [ 00:13:50.076 { 00:13:50.076 "trtype": "RDMA", 00:13:50.076 "adrfam": "IPv4", 00:13:50.076 "traddr": "192.168.100.8", 00:13:50.076 "trsvcid": "4420" 00:13:50.076 } 00:13:50.076 ], 00:13:50.076 "allow_any_host": true, 00:13:50.076 "hosts": [], 00:13:50.076 "serial_number": "SPDK00000000000003", 00:13:50.076 "model_number": "SPDK bdev Controller", 00:13:50.076 "max_namespaces": 32, 00:13:50.076 "min_cntlid": 1, 00:13:50.076 "max_cntlid": 65519, 00:13:50.076 "namespaces": [ 00:13:50.076 { 00:13:50.076 "nsid": 1, 00:13:50.076 "bdev_name": "Null3", 00:13:50.076 "name": "Null3", 00:13:50.076 "nguid": "59B915C5314846F1B9BC7007C2A8009E", 00:13:50.076 "uuid": "59b915c5-3148-46f1-b9bc-7007c2a8009e" 00:13:50.076 } 00:13:50.076 ] 00:13:50.076 }, 00:13:50.076 { 00:13:50.076 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:50.076 "subtype": "NVMe", 00:13:50.076 "listen_addresses": [ 00:13:50.076 { 00:13:50.076 
"trtype": "RDMA", 00:13:50.076 "adrfam": "IPv4", 00:13:50.076 "traddr": "192.168.100.8", 00:13:50.076 "trsvcid": "4420" 00:13:50.076 } 00:13:50.076 ], 00:13:50.076 "allow_any_host": true, 00:13:50.076 "hosts": [], 00:13:50.076 "serial_number": "SPDK00000000000004", 00:13:50.076 "model_number": "SPDK bdev Controller", 00:13:50.076 "max_namespaces": 32, 00:13:50.076 "min_cntlid": 1, 00:13:50.076 "max_cntlid": 65519, 00:13:50.076 "namespaces": [ 00:13:50.076 { 00:13:50.076 "nsid": 1, 00:13:50.076 "bdev_name": "Null4", 00:13:50.076 "name": "Null4", 00:13:50.076 "nguid": "D0C8E0C83A7C48E2830967A8052B4480", 00:13:50.076 "uuid": "d0c8e0c8-3a7c-48e2-8309-67a8052b4480" 00:13:50.076 } 00:13:50.076 ] 00:13:50.076 } 00:13:50.076 ] 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.076 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:50.077 
07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.077 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:50.350 07:19:22 
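
Teardown mirrors setup; the delete/verify sequence above amounts to the following (RPC names verbatim from the trace; the empty-list assertion is added for illustration):

  for i in $(seq 1 4); do
      scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      scripts/rpc.py bdev_null_delete Null$i
  done
  scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
  # Any surviving bdev name would trip discovery.sh's '-n' check above.
  leftover=$(scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')
  [ -z "$leftover" ] || { echo "stale bdevs: $leftover" >&2; exit 1; }
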
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:50.350 rmmod nvme_rdma 00:13:50.350 rmmod nvme_fabrics 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2626396 ']' 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2626396 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2626396 ']' 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2626396 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2626396 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2626396' 00:13:50.350 killing process with pid 2626396 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2626396 00:13:50.350 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2626396 00:13:50.611 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:50.611 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:50.611 00:13:50.611 real 0m10.234s 00:13:50.611 user 0m8.874s 00:13:50.611 sys 0m6.798s 00:13:50.611 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.611 07:19:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.611 ************************************ 00:13:50.611 END TEST 
nvmf_target_discovery 00:13:50.611 ************************************ 00:13:50.611 07:19:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:50.611 07:19:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:50.611 07:19:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:50.611 07:19:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:50.611 ************************************ 00:13:50.611 START TEST nvmf_referrals 00:13:50.611 ************************************ 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:50.611 * Looking for test storage... 00:13:50.611 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.611 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:50.871 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:13:50.871 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.871 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.871 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.871 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.871 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.871 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:13:50.872 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 
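
referrals.sh defines its fixture above: three referral addresses (127.0.0.2-4) on port 4430. Based on the referral RPCs already exercised by the discovery test, registering and inspecting them presumably reduces to something like this (a sketch, not the script's verbatim body):

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
  done
  # Read back what the discovery service will announce.
  scripts/rpc.py nvmf_discovery_get_referrals | jq .
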
00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:59.002 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:59.002 
07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:59.002 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:59.002 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:59.002 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:59.002 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 
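[Editor's note] The get_rdma_if_list loop traced above intersects the netdevs found under the mlx5 PCI functions with the RDMA-capable netdevs reported by rxe_cfg_small.sh. A reduced sketch of that matching (net_devs is assumed to be already populated by the PCI scan above):

    # Sketch of the interface matching seen in the trace.
    mapfile -t rxe_net_devs < <(./scripts/rxe_cfg_small.sh rxe-net)
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"   # mlx_0_0 and mlx_0_1 in this run
                continue 2        # move on to the next net_dev once matched
            fi
        done
    done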
00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:59.003 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:59.003 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:59.003 altname enp217s0f0np0 00:13:59.003 altname ens818f0np0 00:13:59.003 inet 192.168.100.8/24 scope global mlx_0_0 00:13:59.003 valid_lft forever preferred_lft forever 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:59.003 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:59.003 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:59.003 altname enp217s0f1np1 00:13:59.003 altname ens818f1np1 00:13:59.003 inet 192.168.100.9/24 scope global mlx_0_1 00:13:59.003 valid_lft forever preferred_lft forever 00:13:59.003 07:19:31 
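[Editor's note] The get_ip_address helper traced above reduces to a three-stage pipeline over `ip -o -4 addr show`; the fourth field is the CIDR address, and cut strips the prefix length:

    get_ip_address() {
        local interface=$1
        # "6: mlx_0_0 inet 192.168.100.8/24 ..." -> "192.168.100.8"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1   # -> 192.168.100.9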
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:59.003 07:19:31 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:59.003 192.168.100.9' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:59.003 192.168.100.9' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:59.003 192.168.100.9' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.003 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2630751 00:13:59.004 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:59.004 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2630751 00:13:59.004 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2630751 ']' 00:13:59.004 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.004 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.004 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:59.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.004 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.004 07:19:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.004 [2024-07-25 07:19:31.461149] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:13:59.004 [2024-07-25 07:19:31.461203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.004 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.263 [2024-07-25 07:19:31.547538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.263 [2024-07-25 07:19:31.617720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.263 [2024-07-25 07:19:31.617764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.263 [2024-07-25 07:19:31.617773] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.263 [2024-07-25 07:19:31.617785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.263 [2024-07-25 07:19:31.617792] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.263 [2024-07-25 07:19:31.617848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.263 [2024-07-25 07:19:31.617945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.263 [2024-07-25 07:19:31.618017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.263 [2024-07-25 07:19:31.618019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.832 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.832 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:13:59.832 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:59.832 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:59.832 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.832 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.832 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:59.832 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.832 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.832 [2024-07-25 07:19:32.340288] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb3add0/0xb3f2c0) succeed. 00:13:59.832 [2024-07-25 07:19:32.349516] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb3c410/0xb80950) succeed. 
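[Editor's note] Once the target is listening on /var/tmp/spdk.sock, the test creates the RDMA transport via rpc_cmd (the suite's wrapper around scripts/rpc.py). The equivalent standalone call, with the flags exactly as traced above (the relative script path is an assumption about the working directory):

    ./scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma -u 8192 --num-shared-buffers 1024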
00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 [2024-07-25 07:19:32.472252] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
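[Editor's note] The referral setup above adds a discovery listener on 192.168.100.8:8009 and three referrals on port 4430, then asserts the count with jq. As direct RPC calls (sketch, same flags as the trace):

    ./scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expect 3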
common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:00.091 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:14:00.351 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:00.610 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:00.611 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:00.611 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:00.611 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:00.611 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:00.611 07:19:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:00.611 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:00.611 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:00.611 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:00.611 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:00.611 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:00.611 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:00.611 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
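[Editor's note] The nvme-side verification above queries the discovery service from the host and filters the JSON log page; get_referral_ips keeps every record that is not the current discovery subsystem, while get_discovery_entries selects by exact subtype ("nvme subsystem" or "discovery subsystem referral"). The traddr extraction, verbatim from the trace:

    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort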
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:00.870 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:01.130 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.389 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:01.389 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:01.389 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
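[Editor's note] Teardown mirrors setup: the referral added under the discovery NQN is removed and the referral list is asserted empty. As direct RPC calls (sketch):

    ./scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 \
        -n nqn.2014-08.org.nvmexpress.discovery
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expect 0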
00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:01.390 rmmod nvme_rdma 00:14:01.390 rmmod nvme_fabrics 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2630751 ']' 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2630751 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2630751 ']' 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2630751 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2630751 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2630751' 00:14:01.390 killing process with pid 2630751 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2630751 00:14:01.390 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2630751 00:14:01.649 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:01.649 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:01.649 00:14:01.649 real 0m11.120s 00:14:01.649 user 0m12.888s 00:14:01.649 sys 0m7.283s 00:14:01.649 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.649 07:19:34 
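[Editor's note] nvmftestfini, traced above, unloads the host-side modules with a tolerant retry loop (set +e, up to 20 attempts) before killing the target by the pid recorded at startup. A condensed sketch of that pattern; the sleep between attempts is an assumption, the real helper's pacing may differ:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed backoff between retries
    done
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"   # 2630751 in this run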
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:01.649 ************************************ 00:14:01.649 END TEST nvmf_referrals 00:14:01.649 ************************************ 00:14:01.649 07:19:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:14:01.649 07:19:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:01.649 07:19:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.649 07:19:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:01.909 ************************************ 00:14:01.909 START TEST nvmf_connect_disconnect 00:14:01.909 ************************************ 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:14:01.909 * Looking for test storage... 00:14:01.909 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.909 07:19:34 
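[Editor's note] The connect_disconnect test re-sources nvmf/common.sh, whose variables explain the addresses seen throughout this log: interfaces are numbered from NVMF_IP_LEAST_ADDR under NVMF_IP_PREFIX. A sketch of that correspondence (the per-NIC increment is inferred from allocate_nic_ips and the .8/.9 results earlier):

    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    count=$NVMF_IP_LEAST_ADDR
    for nic in mlx_0_0 mlx_0_1; do
        echo "$nic -> $NVMF_IP_PREFIX.$count"
        (( count++ ))
    done
    # mlx_0_0 -> 192.168.100.8
    # mlx_0_1 -> 192.168.100.9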
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:14:01.909 07:19:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
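[Editor's note] The heavily repeated segments in the PATH above are genuine output: paths/export.sh prepends the Go, protoc, and golangci directories unconditionally each time it is sourced, so every re-source of common.sh duplicates them. A hypothetical guarded prepend, shown only to illustrate why the duplication happens (this is not what the script does):

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;           # already present, skip
            *) PATH="$1:$PATH" ;;  # otherwise prepend once
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/golangci/1.54.2/bin
    export PATH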
common/autotest_common.sh@10 -- # set +x 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.039 07:19:42 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:10.039 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:10.039 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:10.300 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:10.300 07:19:42 
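Both functions of the dual-port adapter match device ID 0x1015 (ConnectX-4 Lx), so the branches keyed to 0x1017/0x1019 are skipped and the connect command is pinned to 'nvme connect -i 15', i.e. 15 I/O queues per connection. To confirm the same two ports outside the harness (assuming stock pciutils):

  lspci -d 15b3:1015
  # expected: the two functions found above, 0000:d9:00.0 and 0000:d9:00.1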
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:10.300 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:10.300 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:10.300 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:14:10.301 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:10.301 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:10.301 altname enp217s0f0np0 00:14:10.301 altname ens818f0np0 00:14:10.301 inet 192.168.100.8/24 scope global mlx_0_0 00:14:10.301 valid_lft forever preferred_lft forever 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:10.301 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:10.301 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:10.301 altname enp217s0f1np1 00:14:10.301 altname ens818f1np1 00:14:10.301 inet 192.168.100.9/24 scope global mlx_0_1 00:14:10.301 valid_lft forever preferred_lft forever 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:10.301 192.168.100.9' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:10.301 192.168.100.9' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:10.301 192.168.100.9' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 
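The two interface addresses are then collected and split: get_ip_address reads one interface's IPv4 address, and the head/tail pipes peel the first and second entries off RDMA_IP_LIST. A condensed sketch of that plumbing, using only commands visible in the trace:

  get_ip_address() {
    local interface=$1
    # 'ip -o -4 addr show mlx_0_0' -> '6: mlx_0_0    inet 192.168.100.8/24 ...'
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  RDMA_IP_LIST="$(get_ip_address mlx_0_0)
  $(get_ip_address mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9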
00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:10.301 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.561 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2635387 00:14:10.561 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:10.561 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2635387 00:14:10.561 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2635387 ']' 00:14:10.561 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.561 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.561 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.561 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.561 07:19:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.561 [2024-07-25 07:19:42.882596] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:14:10.561 [2024-07-25 07:19:42.882655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.561 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.561 [2024-07-25 07:19:42.965970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.561 [2024-07-25 07:19:43.036433] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.561 [2024-07-25 07:19:43.036477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
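nvmfappstart launches the target with core mask 0xF, which is why four reactor threads come up below. A hedged approximation of the start-and-wait pattern, with a plain socket poll standing in for the waitforlisten helper (the real helper also confirms the app answers RPCs):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # block until the app has created its RPC socket
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done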
00:14:10.561 [2024-07-25 07:19:43.036486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.561 [2024-07-25 07:19:43.036494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.561 [2024-07-25 07:19:43.036501] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.561 [2024-07-25 07:19:43.036557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.561 [2024-07-25 07:19:43.036661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.561 [2024-07-25 07:19:43.036749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.561 [2024-07-25 07:19:43.036752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:11.499 [2024-07-25 07:19:43.746895] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:14:11.499 [2024-07-25 07:19:43.768420] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x657dd0/0x65c2c0) succeed. 00:14:11.499 [2024-07-25 07:19:43.777691] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x659410/0x69d950) succeed. 
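With one IB device registered per port, the RDMA transport is created with 1024 shared receive buffers, an 8192-byte I/O unit, and a requested in-capsule data size of 0 (the WARNING above shows it being raised to the 256-byte minimum required for msdbd=16). rpc_cmd is the harness wrapper around SPDK's stock scripts/rpc.py client, so an equivalent standalone call would be:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0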
00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:11.499 [2024-07-25 07:19:43.917403] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:11.499 07:19:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:15.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:31.636 07:20:03 
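The provisioning above amounts to: create a 64 MiB Malloc bdev with 512-byte blocks, expose it as a namespace of subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDKISFASTANDAWESOME), and listen on 192.168.100.8:4420 over RDMA. The five 'disconnected 1 controller(s)' lines are nvme-cli output from the loop that then runs with xtrace silenced; a hedged reconstruction of it from the values visible in the trace:

  for ((i = 1; i <= 5; i++)); do   # num_iterations=5 above
    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1
    # each disconnect prints one 'NQN:... disconnected 1 controller(s)' line
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done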
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:31.636 rmmod nvme_rdma 00:14:31.636 rmmod nvme_fabrics 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2635387 ']' 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2635387 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2635387 ']' 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2635387 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2635387 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2635387' 00:14:31.636 killing process with pid 2635387 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2635387 00:14:31.636 07:20:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2635387 00:14:31.636 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.636 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:31.636 00:14:31.636 real 0m29.957s 00:14:31.636 user 1m26.495s 00:14:31.636 sys 0m7.452s 00:14:31.636 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.636 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:31.636 
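Teardown mirrors setup: nvme-rdma and nvme-fabrics are unloaded (the rmmod lines), the target process is killed by PID, and the test reports just under 30 seconds of wall time. A simplified sketch of the killprocess pattern traced above; the real helper additionally checks the process name (reactor_0) before signalling:

  killprocess() {
    local pid=$1
    kill -0 "$pid"              # fail early if the PID is already gone
    kill "$pid" && wait "$pid"  # SIGTERM the target, then reap it
  }
  killprocess "$nvmfpid"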
************************************ 00:14:31.636 END TEST nvmf_connect_disconnect 00:14:31.636 ************************************ 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.896 ************************************ 00:14:31.896 START TEST nvmf_multitarget 00:14:31.896 ************************************ 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:31.896 * Looking for test storage... 00:14:31.896 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.896 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.897 07:20:04
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:31.897 07:20:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.024 
07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:40.024 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:40.024 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.024 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:40.025 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:40.025 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:40.025 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:40.284 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:40.284 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:40.284 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.284 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:40.284 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:40.285 07:20:12 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:40.285 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:40.285 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:40.285 altname enp217s0f0np0 00:14:40.285 altname ens818f0np0 00:14:40.285 inet 192.168.100.8/24 scope global mlx_0_0 00:14:40.285 valid_lft forever preferred_lft forever 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:40.285 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:40.285 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:40.285 altname enp217s0f1np1 00:14:40.285 altname ens818f1np1 00:14:40.285 inet 192.168.100.9/24 scope global mlx_0_1 00:14:40.285 valid_lft forever preferred_lft forever 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:40.285 07:20:12 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:40.285 192.168.100.9' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:40.285 192.168.100.9' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:40.285 192.168.100.9' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2643124 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2643124 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2643124 ']' 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:14:40.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:40.285 07:20:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:40.285 [2024-07-25 07:20:12.783710] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:14:40.285 [2024-07-25 07:20:12.783759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.545 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.545 [2024-07-25 07:20:12.867084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.545 [2024-07-25 07:20:12.941183] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.545 [2024-07-25 07:20:12.941217] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.545 [2024-07-25 07:20:12.941226] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.545 [2024-07-25 07:20:12.941234] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.545 [2024-07-25 07:20:12.941257] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.545 [2024-07-25 07:20:12.941301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.545 [2024-07-25 07:20:12.941392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.545 [2024-07-25 07:20:12.941479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.545 [2024-07-25 07:20:12.941481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.113 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.113 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:41.113 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.113 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:41.113 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:41.372 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.372 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:41.372 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:41.372 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:41.373 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:41.373 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:41.373 "nvmf_tgt_1" 00:14:41.373 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:41.632 "nvmf_tgt_2" 00:14:41.632 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:41.632 07:20:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:41.632 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:41.632 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:41.632 true 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:41.959 true 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:41.959 rmmod nvme_rdma 00:14:41.959 rmmod nvme_fabrics 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2643124 ']' 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2643124 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2643124 ']' 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 
-- # kill -0 2643124 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2643124 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2643124' 00:14:41.959 killing process with pid 2643124 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2643124 00:14:41.959 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2643124 00:14:42.219 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:42.219 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:42.219 00:14:42.219 real 0m10.418s 00:14:42.219 user 0m9.852s 00:14:42.219 sys 0m6.959s 00:14:42.219 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.219 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:42.219 ************************************ 00:14:42.219 END TEST nvmf_multitarget 00:14:42.219 ************************************ 00:14:42.219 07:20:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:42.219 07:20:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:42.219 07:20:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.219 07:20:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:42.219 ************************************ 00:14:42.219 START TEST nvmf_rpc 00:14:42.219 ************************************ 00:14:42.219 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:42.479 * Looking for test storage... 
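For orientation between the two tests: the nvmf_multitarget run that just finished boils down to a short RPC sequence against the running nvmf_tgt. A minimal sketch of that flow, reconstructed from the commands visible in the log above (assuming the same workspace layout; $rpc abbreviates the multitarget_rpc.py path used there):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # add two extra targets
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1              # each delete prints "true" in the log
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target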
00:14:42.479 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.479 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:42.480 07:20:14 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:42.480 07:20:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
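The gather_supported_nvmf_pci_devs step above (its ID list continues in the log below) classifies NICs purely by PCI vendor:device ID, bucketing them into per-family arrays before keeping the family that matches SPDK_TEST_NVMF_NICS. A condensed sketch of that pattern, assuming pci_bus_cache has already been filled by an earlier PCI scan as nvmf/common.sh does:

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})     # Intel E810 family
    x722+=(${pci_bus_cache["$intel:0x37d2"]})     # Intel X722 family
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})   # Mellanox mlx5 family; 0x1015 matches the NICs found below
    pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")
    [[ $SPDK_TEST_NVMF_NICS == mlx5 ]] && pci_devs=("${mlx[@]}")   # this run uses mlx5, so only the mlx bucket survives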
00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:50.607 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:50.607 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:50.607 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.608 
07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:50.608 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:50.608 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:50.608 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:50.608 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:50.608 altname enp217s0f0np0 00:14:50.608 altname ens818f0np0 00:14:50.608 inet 192.168.100.8/24 scope global mlx_0_0 00:14:50.608 valid_lft forever preferred_lft forever 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:50.608 07:20:22 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:50.608 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:50.609 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:50.609 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:50.609 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:50.609 07:20:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:50.609 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:50.609 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:50.609 altname enp217s0f1np1 00:14:50.609 altname ens818f1np1 00:14:50.609 inet 192.168.100.9/24 scope global mlx_0_1 00:14:50.609 valid_lft forever preferred_lft forever 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:50.609 07:20:23 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:50.609 192.168.100.9' 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:50.609 192.168.100.9' 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:50.609 192.168.100.9' 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2647383 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@482 -- # waitforlisten 2647383 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2647383 ']' 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.609 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.869 [2024-07-25 07:20:23.166259] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:14:50.869 [2024-07-25 07:20:23.166321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.869 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.869 [2024-07-25 07:20:23.251863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.869 [2024-07-25 07:20:23.326020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.869 [2024-07-25 07:20:23.326058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.869 [2024-07-25 07:20:23.326067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.869 [2024-07-25 07:20:23.326075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.869 [2024-07-25 07:20:23.326097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
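Before the rpc.sh test body begins below, note how the target addressing was derived above: get_ip_address strips the prefix length from the `ip -o -4 addr show` output, and the two target IPs are then split out of the newline-separated list with head/tail. A sketch of those two steps as they appear in the log (literal addresses shown for illustration only):

    get_ip_address() {                          # "ip -o -4" prints one line per address;
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1   # field 4 is ADDR/PREFIX
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"                  # here: 192.168.100.8 and 192.168.100.9
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'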
00:14:50.869 [2024-07-25 07:20:23.326138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.869 [2024-07-25 07:20:23.326232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.869 [2024-07-25 07:20:23.326319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.869 [2024-07-25 07:20:23.326320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.806 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:51.806 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:51.806 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.806 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:51.806 07:20:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:51.806 "tick_rate": 2500000000, 00:14:51.806 "poll_groups": [ 00:14:51.806 { 00:14:51.806 "name": "nvmf_tgt_poll_group_000", 00:14:51.806 "admin_qpairs": 0, 00:14:51.806 "io_qpairs": 0, 00:14:51.806 "current_admin_qpairs": 0, 00:14:51.806 "current_io_qpairs": 0, 00:14:51.806 "pending_bdev_io": 0, 00:14:51.806 "completed_nvme_io": 0, 00:14:51.806 "transports": [] 00:14:51.806 }, 00:14:51.806 { 00:14:51.806 "name": "nvmf_tgt_poll_group_001", 00:14:51.806 "admin_qpairs": 0, 00:14:51.806 "io_qpairs": 0, 00:14:51.806 "current_admin_qpairs": 0, 00:14:51.806 "current_io_qpairs": 0, 00:14:51.806 "pending_bdev_io": 0, 00:14:51.806 "completed_nvme_io": 0, 00:14:51.806 "transports": [] 00:14:51.806 }, 00:14:51.806 { 00:14:51.806 "name": "nvmf_tgt_poll_group_002", 00:14:51.806 "admin_qpairs": 0, 00:14:51.806 "io_qpairs": 0, 00:14:51.806 "current_admin_qpairs": 0, 00:14:51.806 "current_io_qpairs": 0, 00:14:51.806 "pending_bdev_io": 0, 00:14:51.806 "completed_nvme_io": 0, 00:14:51.806 "transports": [] 00:14:51.806 }, 00:14:51.806 { 00:14:51.806 "name": "nvmf_tgt_poll_group_003", 00:14:51.806 "admin_qpairs": 0, 00:14:51.806 "io_qpairs": 0, 00:14:51.806 "current_admin_qpairs": 0, 00:14:51.806 "current_io_qpairs": 0, 00:14:51.806 "pending_bdev_io": 0, 00:14:51.806 "completed_nvme_io": 0, 00:14:51.806 "transports": [] 00:14:51.806 } 00:14:51.806 ] 00:14:51.806 }' 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:51.806 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.807 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.807 [2024-07-25 07:20:24.167262] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa48e30/0xa4d320) succeed. 00:14:51.807 [2024-07-25 07:20:24.176693] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa4a470/0xa8e9b0) succeed. 00:14:51.807 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.807 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:51.807 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.807 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.066 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.066 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:52.066 "tick_rate": 2500000000, 00:14:52.066 "poll_groups": [ 00:14:52.066 { 00:14:52.067 "name": "nvmf_tgt_poll_group_000", 00:14:52.067 "admin_qpairs": 0, 00:14:52.067 "io_qpairs": 0, 00:14:52.067 "current_admin_qpairs": 0, 00:14:52.067 "current_io_qpairs": 0, 00:14:52.067 "pending_bdev_io": 0, 00:14:52.067 "completed_nvme_io": 0, 00:14:52.067 "transports": [ 00:14:52.067 { 00:14:52.067 "trtype": "RDMA", 00:14:52.067 "pending_data_buffer": 0, 00:14:52.067 "devices": [ 00:14:52.067 { 00:14:52.067 "name": "mlx5_0", 00:14:52.067 "polls": 15393, 00:14:52.067 "idle_polls": 15393, 00:14:52.067 "completions": 0, 00:14:52.067 "requests": 0, 00:14:52.067 "request_latency": 0, 00:14:52.067 "pending_free_request": 0, 00:14:52.067 "pending_rdma_read": 0, 00:14:52.067 "pending_rdma_write": 0, 00:14:52.067 "pending_rdma_send": 0, 00:14:52.067 "total_send_wrs": 0, 00:14:52.067 "send_doorbell_updates": 0, 00:14:52.067 "total_recv_wrs": 4096, 00:14:52.067 "recv_doorbell_updates": 1 00:14:52.067 }, 00:14:52.067 { 00:14:52.067 "name": "mlx5_1", 00:14:52.067 "polls": 15393, 00:14:52.067 "idle_polls": 15393, 00:14:52.067 "completions": 0, 00:14:52.067 "requests": 0, 00:14:52.067 "request_latency": 0, 00:14:52.067 "pending_free_request": 0, 00:14:52.067 "pending_rdma_read": 0, 00:14:52.067 "pending_rdma_write": 0, 00:14:52.067 "pending_rdma_send": 0, 00:14:52.067 "total_send_wrs": 0, 00:14:52.067 "send_doorbell_updates": 0, 00:14:52.067 "total_recv_wrs": 4096, 00:14:52.067 "recv_doorbell_updates": 1 00:14:52.067 } 00:14:52.067 ] 00:14:52.067 } 00:14:52.067 ] 00:14:52.067 }, 00:14:52.067 { 00:14:52.067 "name": "nvmf_tgt_poll_group_001", 00:14:52.067 "admin_qpairs": 0, 00:14:52.067 "io_qpairs": 0, 00:14:52.067 "current_admin_qpairs": 0, 00:14:52.067 "current_io_qpairs": 0, 00:14:52.067 "pending_bdev_io": 0, 00:14:52.067 "completed_nvme_io": 0, 00:14:52.067 "transports": [ 00:14:52.067 { 00:14:52.067 "trtype": "RDMA", 00:14:52.067 "pending_data_buffer": 0, 00:14:52.067 "devices": [ 00:14:52.067 { 00:14:52.067 "name": "mlx5_0", 
00:14:52.067 "polls": 9770, 00:14:52.067 "idle_polls": 9770, 00:14:52.067 "completions": 0, 00:14:52.067 "requests": 0, 00:14:52.067 "request_latency": 0, 00:14:52.067 "pending_free_request": 0, 00:14:52.067 "pending_rdma_read": 0, 00:14:52.067 "pending_rdma_write": 0, 00:14:52.067 "pending_rdma_send": 0, 00:14:52.067 "total_send_wrs": 0, 00:14:52.067 "send_doorbell_updates": 0, 00:14:52.067 "total_recv_wrs": 4096, 00:14:52.067 "recv_doorbell_updates": 1 00:14:52.067 }, 00:14:52.067 { 00:14:52.067 "name": "mlx5_1", 00:14:52.067 "polls": 9770, 00:14:52.067 "idle_polls": 9770, 00:14:52.067 "completions": 0, 00:14:52.067 "requests": 0, 00:14:52.067 "request_latency": 0, 00:14:52.067 "pending_free_request": 0, 00:14:52.067 "pending_rdma_read": 0, 00:14:52.067 "pending_rdma_write": 0, 00:14:52.067 "pending_rdma_send": 0, 00:14:52.067 "total_send_wrs": 0, 00:14:52.067 "send_doorbell_updates": 0, 00:14:52.067 "total_recv_wrs": 4096, 00:14:52.067 "recv_doorbell_updates": 1 00:14:52.067 } 00:14:52.067 ] 00:14:52.067 } 00:14:52.067 ] 00:14:52.067 }, 00:14:52.067 { 00:14:52.067 "name": "nvmf_tgt_poll_group_002", 00:14:52.067 "admin_qpairs": 0, 00:14:52.067 "io_qpairs": 0, 00:14:52.067 "current_admin_qpairs": 0, 00:14:52.067 "current_io_qpairs": 0, 00:14:52.067 "pending_bdev_io": 0, 00:14:52.067 "completed_nvme_io": 0, 00:14:52.067 "transports": [ 00:14:52.067 { 00:14:52.067 "trtype": "RDMA", 00:14:52.067 "pending_data_buffer": 0, 00:14:52.067 "devices": [ 00:14:52.067 { 00:14:52.067 "name": "mlx5_0", 00:14:52.067 "polls": 5403, 00:14:52.067 "idle_polls": 5403, 00:14:52.067 "completions": 0, 00:14:52.067 "requests": 0, 00:14:52.067 "request_latency": 0, 00:14:52.067 "pending_free_request": 0, 00:14:52.067 "pending_rdma_read": 0, 00:14:52.067 "pending_rdma_write": 0, 00:14:52.067 "pending_rdma_send": 0, 00:14:52.067 "total_send_wrs": 0, 00:14:52.067 "send_doorbell_updates": 0, 00:14:52.067 "total_recv_wrs": 4096, 00:14:52.067 "recv_doorbell_updates": 1 00:14:52.067 }, 00:14:52.067 { 00:14:52.067 "name": "mlx5_1", 00:14:52.067 "polls": 5403, 00:14:52.067 "idle_polls": 5403, 00:14:52.067 "completions": 0, 00:14:52.067 "requests": 0, 00:14:52.067 "request_latency": 0, 00:14:52.067 "pending_free_request": 0, 00:14:52.067 "pending_rdma_read": 0, 00:14:52.067 "pending_rdma_write": 0, 00:14:52.067 "pending_rdma_send": 0, 00:14:52.067 "total_send_wrs": 0, 00:14:52.067 "send_doorbell_updates": 0, 00:14:52.067 "total_recv_wrs": 4096, 00:14:52.067 "recv_doorbell_updates": 1 00:14:52.067 } 00:14:52.067 ] 00:14:52.067 } 00:14:52.067 ] 00:14:52.067 }, 00:14:52.067 { 00:14:52.067 "name": "nvmf_tgt_poll_group_003", 00:14:52.067 "admin_qpairs": 0, 00:14:52.067 "io_qpairs": 0, 00:14:52.067 "current_admin_qpairs": 0, 00:14:52.067 "current_io_qpairs": 0, 00:14:52.067 "pending_bdev_io": 0, 00:14:52.067 "completed_nvme_io": 0, 00:14:52.067 "transports": [ 00:14:52.067 { 00:14:52.067 "trtype": "RDMA", 00:14:52.067 "pending_data_buffer": 0, 00:14:52.067 "devices": [ 00:14:52.067 { 00:14:52.067 "name": "mlx5_0", 00:14:52.067 "polls": 897, 00:14:52.067 "idle_polls": 897, 00:14:52.067 "completions": 0, 00:14:52.067 "requests": 0, 00:14:52.067 "request_latency": 0, 00:14:52.067 "pending_free_request": 0, 00:14:52.067 "pending_rdma_read": 0, 00:14:52.067 "pending_rdma_write": 0, 00:14:52.067 "pending_rdma_send": 0, 00:14:52.067 "total_send_wrs": 0, 00:14:52.067 "send_doorbell_updates": 0, 00:14:52.067 "total_recv_wrs": 4096, 00:14:52.067 "recv_doorbell_updates": 1 00:14:52.067 }, 00:14:52.067 { 00:14:52.067 "name": "mlx5_1", 
00:14:52.067 "polls": 897, 00:14:52.067 "idle_polls": 897, 00:14:52.067 "completions": 0, 00:14:52.067 "requests": 0, 00:14:52.067 "request_latency": 0, 00:14:52.067 "pending_free_request": 0, 00:14:52.067 "pending_rdma_read": 0, 00:14:52.067 "pending_rdma_write": 0, 00:14:52.067 "pending_rdma_send": 0, 00:14:52.067 "total_send_wrs": 0, 00:14:52.067 "send_doorbell_updates": 0, 00:14:52.067 "total_recv_wrs": 4096, 00:14:52.067 "recv_doorbell_updates": 1 00:14:52.067 } 00:14:52.067 ] 00:14:52.067 } 00:14:52.067 ] 00:14:52.067 } 00:14:52.067 ] 00:14:52.067 }' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:52.067 07:20:24 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.067 Malloc1 00:14:52.067 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.068 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:52.068 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.068 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.068 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.068 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:52.068 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.068 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.327 [2024-07-25 07:20:24.615704] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:52.327 07:20:24 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:52.327 [2024-07-25 07:20:24.657755] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:52.327 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:52.327 could not add new controller: failed to write to nvme-fabrics device 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:52.327 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:52.328 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:52.328 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:52.328 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.328 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.328 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.328 07:20:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:53.264 07:20:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:53.264 07:20:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:53.264 07:20:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.264 07:20:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:53.265 07:20:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:55.169 07:20:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:55.169 07:20:27 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:55.169 07:20:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:55.169 07:20:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:55.169 07:20:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:55.169 07:20:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:55.169 07:20:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:56.545 [2024-07-25 07:20:28.739493] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:56.545 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:56.545 could not add new controller: failed to write to nvme-fabrics device 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.545 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:57.482 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:57.482 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:57.482 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:57.482 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:57.482 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:59.389 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:59.389 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:59.389 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:59.389 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:59.389 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:59.389 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:59.389 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:00.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 [2024-07-25 07:20:32.803751] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.327 07:20:32 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.327 07:20:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:01.265 07:20:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:01.265 07:20:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:01.265 07:20:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.265 07:20:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:01.265 07:20:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:03.835 07:20:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:03.835 07:20:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:03.835 07:20:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.835 07:20:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:03.835 07:20:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.835 07:20:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:03.835 07:20:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:04.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.403 [2024-07-25 07:20:36.821142] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.403 07:20:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:05.341 07:20:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:05.341 07:20:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:05.341 07:20:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:05.341 07:20:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:05.341 07:20:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:07.878 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:07.878 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:07.878 
07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:07.878 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:07.878 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.878 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:07.878 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.447 [2024-07-25 07:20:40.848903] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.447 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:09.386 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:09.386 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:09.386 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.386 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:09.386 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:11.922 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:11.922 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:11.922 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.922 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:11.922 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.922 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:11.922 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:12.491 07:20:44 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.491 [2024-07-25 07:20:44.883319] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.491 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:13.429 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:13.430 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:13.430 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.430 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:13.430 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:15.966 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:15.966 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:15.966 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.966 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:15.966 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.966 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:15.966 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:16.535 07:20:48 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.535 [2024-07-25 07:20:48.931133] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.535 07:20:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:17.472 07:20:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:17.472 07:20:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:17.472 07:20:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:17.472 07:20:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:17.472 07:20:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:20.005 07:20:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:20.005 07:20:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:20.005 07:20:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.005 07:20:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:20.005 07:20:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:15:20.005 07:20:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:20.005 07:20:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 [2024-07-25 07:20:52.983388] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 [2024-07-25 07:20:53.031565] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:53 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 [2024-07-25 07:20:53.083744] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.574 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 [2024-07-25 07:20:53.131884] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
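Each pass of the loop traced above and below (target/rpc.sh@99-107) exercises the subsystem lifecycle RPCs five times with no host connected: create the subsystem, attach the RDMA listener, add the Malloc1 namespace (evidently nsid 1, given the later remove), open the host ACL, then remove the namespace and delete the subsystem. A condensed sketch of one iteration, assuming rpc_cmd forwards to SPDK's scripts/rpc.py; the $rpc_py path below is hypothetical and the harness resolves it itself:

    rpc_py=./spdk/scripts/rpc.py   # hypothetical path; rpc_cmd wraps this in the test harness
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc_py nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME        # bare subsystem, serial only
    $rpc_py nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
    $rpc_py nvmf_subsystem_add_ns "$nqn" Malloc1                        # nsid auto-assigned (1)
    $rpc_py nvmf_subsystem_allow_any_host "$nqn"                        # open the host ACL
    $rpc_py nvmf_subsystem_remove_ns "$nqn" 1                           # detach nsid 1 again
    $rpc_py nvmf_delete_subsystem "$nqn"                                # tear the subsystem down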
00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 [2024-07-25 07:20:53.180101] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.837 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.838 07:20:53 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:20.838 "tick_rate": 2500000000, 00:15:20.838 "poll_groups": [ 00:15:20.838 { 00:15:20.838 "name": "nvmf_tgt_poll_group_000", 00:15:20.838 "admin_qpairs": 2, 00:15:20.838 "io_qpairs": 27, 00:15:20.838 "current_admin_qpairs": 0, 00:15:20.838 "current_io_qpairs": 0, 00:15:20.838 "pending_bdev_io": 0, 00:15:20.838 "completed_nvme_io": 183, 00:15:20.838 "transports": [ 00:15:20.838 { 00:15:20.838 "trtype": "RDMA", 00:15:20.838 "pending_data_buffer": 0, 00:15:20.838 "devices": [ 00:15:20.838 { 00:15:20.838 "name": "mlx5_0", 00:15:20.838 "polls": 3520665, 00:15:20.838 "idle_polls": 3520251, 00:15:20.838 "completions": 473, 00:15:20.838 "requests": 236, 00:15:20.838 "request_latency": 50946274, 00:15:20.838 "pending_free_request": 0, 00:15:20.838 "pending_rdma_read": 0, 00:15:20.838 "pending_rdma_write": 0, 00:15:20.838 "pending_rdma_send": 0, 00:15:20.838 "total_send_wrs": 417, 00:15:20.838 "send_doorbell_updates": 199, 00:15:20.838 "total_recv_wrs": 4332, 00:15:20.838 "recv_doorbell_updates": 199 00:15:20.838 }, 00:15:20.838 { 00:15:20.838 "name": "mlx5_1", 00:15:20.838 "polls": 3520665, 00:15:20.838 "idle_polls": 3520665, 00:15:20.838 "completions": 0, 00:15:20.838 "requests": 0, 00:15:20.838 "request_latency": 0, 00:15:20.838 "pending_free_request": 0, 00:15:20.838 "pending_rdma_read": 0, 00:15:20.838 "pending_rdma_write": 0, 00:15:20.838 "pending_rdma_send": 0, 00:15:20.838 "total_send_wrs": 0, 00:15:20.838 "send_doorbell_updates": 0, 00:15:20.838 "total_recv_wrs": 4096, 00:15:20.838 "recv_doorbell_updates": 1 00:15:20.838 } 00:15:20.838 ] 00:15:20.838 } 00:15:20.838 ] 00:15:20.838 }, 00:15:20.838 { 00:15:20.838 "name": "nvmf_tgt_poll_group_001", 00:15:20.838 "admin_qpairs": 2, 00:15:20.838 "io_qpairs": 26, 00:15:20.838 "current_admin_qpairs": 0, 00:15:20.838 "current_io_qpairs": 0, 00:15:20.838 "pending_bdev_io": 0, 00:15:20.838 "completed_nvme_io": 119, 00:15:20.838 "transports": [ 00:15:20.838 { 00:15:20.838 "trtype": "RDMA", 00:15:20.838 "pending_data_buffer": 0, 00:15:20.838 "devices": [ 00:15:20.838 { 00:15:20.838 "name": "mlx5_0", 00:15:20.838 "polls": 3555905, 00:15:20.838 "idle_polls": 3555599, 00:15:20.838 "completions": 344, 00:15:20.838 "requests": 172, 00:15:20.838 "request_latency": 35104682, 00:15:20.838 "pending_free_request": 0, 00:15:20.838 "pending_rdma_read": 0, 00:15:20.838 "pending_rdma_write": 0, 00:15:20.838 "pending_rdma_send": 0, 00:15:20.838 "total_send_wrs": 290, 00:15:20.838 "send_doorbell_updates": 147, 00:15:20.838 "total_recv_wrs": 4268, 00:15:20.838 "recv_doorbell_updates": 148 00:15:20.838 }, 00:15:20.838 { 00:15:20.838 "name": "mlx5_1", 00:15:20.838 "polls": 3555905, 00:15:20.838 "idle_polls": 3555905, 00:15:20.838 "completions": 0, 00:15:20.838 "requests": 0, 00:15:20.838 "request_latency": 0, 00:15:20.838 "pending_free_request": 0, 00:15:20.838 
"pending_rdma_read": 0, 00:15:20.838 "pending_rdma_write": 0, 00:15:20.838 "pending_rdma_send": 0, 00:15:20.838 "total_send_wrs": 0, 00:15:20.838 "send_doorbell_updates": 0, 00:15:20.838 "total_recv_wrs": 4096, 00:15:20.838 "recv_doorbell_updates": 1 00:15:20.838 } 00:15:20.838 ] 00:15:20.838 } 00:15:20.838 ] 00:15:20.838 }, 00:15:20.838 { 00:15:20.838 "name": "nvmf_tgt_poll_group_002", 00:15:20.838 "admin_qpairs": 1, 00:15:20.838 "io_qpairs": 26, 00:15:20.838 "current_admin_qpairs": 0, 00:15:20.838 "current_io_qpairs": 0, 00:15:20.838 "pending_bdev_io": 0, 00:15:20.838 "completed_nvme_io": 76, 00:15:20.838 "transports": [ 00:15:20.838 { 00:15:20.838 "trtype": "RDMA", 00:15:20.838 "pending_data_buffer": 0, 00:15:20.838 "devices": [ 00:15:20.838 { 00:15:20.838 "name": "mlx5_0", 00:15:20.838 "polls": 3623565, 00:15:20.838 "idle_polls": 3623379, 00:15:20.838 "completions": 207, 00:15:20.838 "requests": 103, 00:15:20.838 "request_latency": 19585198, 00:15:20.838 "pending_free_request": 0, 00:15:20.838 "pending_rdma_read": 0, 00:15:20.838 "pending_rdma_write": 0, 00:15:20.838 "pending_rdma_send": 0, 00:15:20.838 "total_send_wrs": 166, 00:15:20.838 "send_doorbell_updates": 92, 00:15:20.838 "total_recv_wrs": 4199, 00:15:20.838 "recv_doorbell_updates": 92 00:15:20.838 }, 00:15:20.838 { 00:15:20.838 "name": "mlx5_1", 00:15:20.838 "polls": 3623565, 00:15:20.838 "idle_polls": 3623565, 00:15:20.838 "completions": 0, 00:15:20.838 "requests": 0, 00:15:20.838 "request_latency": 0, 00:15:20.838 "pending_free_request": 0, 00:15:20.838 "pending_rdma_read": 0, 00:15:20.838 "pending_rdma_write": 0, 00:15:20.838 "pending_rdma_send": 0, 00:15:20.838 "total_send_wrs": 0, 00:15:20.838 "send_doorbell_updates": 0, 00:15:20.838 "total_recv_wrs": 4096, 00:15:20.838 "recv_doorbell_updates": 1 00:15:20.838 } 00:15:20.838 ] 00:15:20.838 } 00:15:20.838 ] 00:15:20.838 }, 00:15:20.838 { 00:15:20.838 "name": "nvmf_tgt_poll_group_003", 00:15:20.838 "admin_qpairs": 2, 00:15:20.838 "io_qpairs": 26, 00:15:20.838 "current_admin_qpairs": 0, 00:15:20.838 "current_io_qpairs": 0, 00:15:20.838 "pending_bdev_io": 0, 00:15:20.838 "completed_nvme_io": 77, 00:15:20.838 "transports": [ 00:15:20.838 { 00:15:20.838 "trtype": "RDMA", 00:15:20.838 "pending_data_buffer": 0, 00:15:20.838 "devices": [ 00:15:20.838 { 00:15:20.838 "name": "mlx5_0", 00:15:20.838 "polls": 2831185, 00:15:20.838 "idle_polls": 2830942, 00:15:20.838 "completions": 264, 00:15:20.838 "requests": 132, 00:15:20.838 "request_latency": 22728490, 00:15:20.838 "pending_free_request": 0, 00:15:20.838 "pending_rdma_read": 0, 00:15:20.838 "pending_rdma_write": 0, 00:15:20.838 "pending_rdma_send": 0, 00:15:20.838 "total_send_wrs": 209, 00:15:20.838 "send_doorbell_updates": 119, 00:15:20.838 "total_recv_wrs": 4228, 00:15:20.838 "recv_doorbell_updates": 120 00:15:20.838 }, 00:15:20.838 { 00:15:20.838 "name": "mlx5_1", 00:15:20.838 "polls": 2831185, 00:15:20.838 "idle_polls": 2831185, 00:15:20.838 "completions": 0, 00:15:20.838 "requests": 0, 00:15:20.838 "request_latency": 0, 00:15:20.838 "pending_free_request": 0, 00:15:20.838 "pending_rdma_read": 0, 00:15:20.838 "pending_rdma_write": 0, 00:15:20.838 "pending_rdma_send": 0, 00:15:20.838 "total_send_wrs": 0, 00:15:20.838 "send_doorbell_updates": 0, 00:15:20.838 "total_recv_wrs": 4096, 00:15:20.838 "recv_doorbell_updates": 1 00:15:20.838 } 00:15:20.838 ] 00:15:20.838 } 00:15:20.838 ] 00:15:20.838 } 00:15:20.838 ] 00:15:20.838 }' 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:15:20.838 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1288 > 0 )) 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 128364644 > 0 )) 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:21.160 rmmod nvme_rdma 00:15:21.160 rmmod nvme_fabrics 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:21.160 
07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2647383 ']' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2647383 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2647383 ']' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2647383 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2647383 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2647383' 00:15:21.160 killing process with pid 2647383 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2647383 00:15:21.160 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2647383 00:15:21.418 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:21.418 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:21.418 00:15:21.419 real 0m39.100s 00:15:21.419 user 2m4.189s 00:15:21.419 sys 0m7.987s 00:15:21.419 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:21.419 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.419 ************************************ 00:15:21.419 END TEST nvmf_rpc 00:15:21.419 ************************************ 00:15:21.419 07:20:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:21.419 07:20:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:21.419 07:20:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.419 07:20:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.419 ************************************ 00:15:21.419 START TEST nvmf_invalid 00:15:21.419 ************************************ 00:15:21.419 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:21.678 * Looking for test storage... 
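[Editor's note] The jsum calls traced above (target/rpc.sh lines 19-20) sum one numeric field across the nvmf_get_stats JSON and assert the total is positive. A sketch reconstructed from the jq and awk invocations visible in the trace; feeding the stats via a $stats variable is an assumption about how the script wires it up:

    # Sum a jq filter over the nvmf_get_stats output captured in $stats.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].io_qpairs'                                # 105 in the run above
    jsum '.poll_groups[].transports[].devices[].request_latency'  # 128364644 in the run above

The asserted totals ((( 105 > 0 )), (( 1288 > 0 )), (( 128364644 > 0 ))) in the trace come straight from these sums over the four poll groups.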
00:15:21.678 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:21.678 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.678 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:21.678 07:20:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.678 07:20:54 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.678 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.679 07:20:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:29.803 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.803 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:29.803 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:29.804 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:29.804 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:29.804 07:21:01 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:29.804 
07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:29.804 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:29.804 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:29.804 altname enp217s0f0np0 00:15:29.804 altname ens818f0np0 00:15:29.804 inet 192.168.100.8/24 scope global mlx_0_0 00:15:29.804 valid_lft forever preferred_lft forever 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:29.804 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:29.804 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:29.804 altname enp217s0f1np1 00:15:29.804 altname ens818f1np1 00:15:29.804 inet 192.168.100.9/24 scope global mlx_0_1 00:15:29.804 valid_lft forever preferred_lft forever 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:15:29.804 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:29.805 192.168.100.9' 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:29.805 192.168.100.9' 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:29.805 192.168.100.9' 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:15:29.805 07:21:01 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2656865 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2656865 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2656865 ']' 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.805 07:21:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:29.805 [2024-07-25 07:21:01.954462] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:15:29.805 [2024-07-25 07:21:01.954511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.805 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.805 [2024-07-25 07:21:02.037865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.805 [2024-07-25 07:21:02.111892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.805 [2024-07-25 07:21:02.111929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.805 [2024-07-25 07:21:02.111939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.805 [2024-07-25 07:21:02.111948] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
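[Editor's note] The address-discovery steps traced above (nvmf/common.sh lines 112-113 and 456-458) pull the IPv4 address off each mlx netdev and split the list into first and second target IPs. A condensed sketch of that logic; the hard-coded interface list stands in for the get_rdma_if_list helper, which is an assumption:

    # Extract the IPv4 address assigned to an RDMA-capable netdev.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 in this run

With both IPs resolved, the script loads nvme-rdma and starts nvmf_tgt, which the nvmf_invalid test below then probes with malformed subsystem names, serial numbers, and model numbers.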
00:15:29.805 [2024-07-25 07:21:02.111971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.805 [2024-07-25 07:21:02.112012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.805 [2024-07-25 07:21:02.112105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.805 [2024-07-25 07:21:02.112188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.805 [2024-07-25 07:21:02.112190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.372 07:21:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.372 07:21:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:30.372 07:21:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.372 07:21:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.372 07:21:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:30.372 07:21:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.372 07:21:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:30.372 07:21:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6011 00:15:30.631 [2024-07-25 07:21:02.977333] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:30.631 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:30.631 { 00:15:30.631 "nqn": "nqn.2016-06.io.spdk:cnode6011", 00:15:30.631 "tgt_name": "foobar", 00:15:30.631 "method": "nvmf_create_subsystem", 00:15:30.631 "req_id": 1 00:15:30.631 } 00:15:30.631 Got JSON-RPC error response 00:15:30.631 response: 00:15:30.631 { 00:15:30.631 "code": -32603, 00:15:30.631 "message": "Unable to find target foobar" 00:15:30.631 }' 00:15:30.631 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:30.631 { 00:15:30.631 "nqn": "nqn.2016-06.io.spdk:cnode6011", 00:15:30.631 "tgt_name": "foobar", 00:15:30.631 "method": "nvmf_create_subsystem", 00:15:30.631 "req_id": 1 00:15:30.631 } 00:15:30.631 Got JSON-RPC error response 00:15:30.631 response: 00:15:30.631 { 00:15:30.631 "code": -32603, 00:15:30.631 "message": "Unable to find target foobar" 00:15:30.631 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:30.631 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:30.631 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32379 00:15:30.890 [2024-07-25 07:21:03.174041] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32379: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:30.890 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:30.890 { 00:15:30.890 "nqn": "nqn.2016-06.io.spdk:cnode32379", 00:15:30.890 "serial_number": 
"SPDKISFASTANDAWESOME\u001f", 00:15:30.890 "method": "nvmf_create_subsystem", 00:15:30.890 "req_id": 1 00:15:30.890 } 00:15:30.890 Got JSON-RPC error response 00:15:30.890 response: 00:15:30.890 { 00:15:30.890 "code": -32602, 00:15:30.890 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:30.890 }' 00:15:30.891 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:30.891 { 00:15:30.891 "nqn": "nqn.2016-06.io.spdk:cnode32379", 00:15:30.891 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:30.891 "method": "nvmf_create_subsystem", 00:15:30.891 "req_id": 1 00:15:30.891 } 00:15:30.891 Got JSON-RPC error response 00:15:30.891 response: 00:15:30.891 { 00:15:30.891 "code": -32602, 00:15:30.891 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:30.891 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:30.891 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:30.891 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15960 00:15:30.891 [2024-07-25 07:21:03.366598] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15960: invalid model number 'SPDK_Controller' 00:15:30.891 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:30.891 { 00:15:30.891 "nqn": "nqn.2016-06.io.spdk:cnode15960", 00:15:30.891 "model_number": "SPDK_Controller\u001f", 00:15:30.891 "method": "nvmf_create_subsystem", 00:15:30.891 "req_id": 1 00:15:30.891 } 00:15:30.891 Got JSON-RPC error response 00:15:30.891 response: 00:15:30.891 { 00:15:30.891 "code": -32602, 00:15:30.891 "message": "Invalid MN SPDK_Controller\u001f" 00:15:30.891 }' 00:15:30.891 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:30.891 { 00:15:30.891 "nqn": "nqn.2016-06.io.spdk:cnode15960", 00:15:30.891 "model_number": "SPDK_Controller\u001f", 00:15:30.891 "method": "nvmf_create_subsystem", 00:15:30.891 "req_id": 1 00:15:30.891 } 00:15:30.891 Got JSON-RPC error response 00:15:30.891 response: 00:15:30.891 { 00:15:30.891 "code": -32602, 00:15:30.891 "message": "Invalid MN SPDK_Controller\u001f" 00:15:30.891 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:30.891 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:30.891 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:30.891 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:30.891 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:30.891 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:30.891 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:30.891 07:21:03 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
[... gen_random_s xtrace elided: target/invalid.sh@24-25 appends one random printable character per iteration via printf %x / echo -e until the 21-character serial number below is complete ...]
00:15:31.151 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]]
00:15:31.151 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'vy.Q5/6}FJ,T1VVJ.tU'\''T'
00:15:31.151 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'vy.Q5/6}FJ,T1VVJ.tU'\''T' nqn.2016-06.io.spdk:cnode2602
[2024-07-25 07:21:03.731778] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2602: invalid serial number 'vy.Q5/6}FJ,T1VVJ.tU'T'
00:15:31.411 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:15:31.411 {
00:15:31.412 "nqn": "nqn.2016-06.io.spdk:cnode2602",
00:15:31.412 "serial_number": "vy.Q5/6}FJ,T1VVJ.tU'\''T",
00:15:31.412 "method": "nvmf_create_subsystem",
00:15:31.412 "req_id": 1
00:15:31.412 }
00:15:31.412 Got JSON-RPC error response
00:15:31.412 response:
00:15:31.412 {
00:15:31.412 "code": -32602,
00:15:31.412 "message": "Invalid SN vy.Q5/6}FJ,T1VVJ.tU'\''T"
00:15:31.412 }'
00:15:31.412 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:15:31.412 {
00:15:31.412 "nqn": "nqn.2016-06.io.spdk:cnode2602",
00:15:31.412 "serial_number": "vy.Q5/6}FJ,T1VVJ.tU'T",
00:15:31.412 "method": "nvmf_create_subsystem",
00:15:31.412 "req_id": 1
00:15:31.412 }
00:15:31.412 Got JSON-RPC error response
00:15:31.412 response:
00:15:31.412 {
00:15:31.412 "code": -32602,
00:15:31.412 "message": "Invalid SN vy.Q5/6}FJ,T1VVJ.tU'T"
00:15:31.412 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:15:31.412 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:15:31.412 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:15:31.412 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:15:31.412 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:15:31.412 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:15:31.412 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:15:31.412 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
[... gen_random_s xtrace elided: the same per-character loop builds the 41-character model number below ...]
00:15:31.673 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Z == \- ]]
00:15:31.673 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Z#6=J\A'\''NQlQlAO,l:OH?LaVQIFSOg{M@Su8!KN'\''a'
00:15:31.673 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Z#6=J\A'\''NQlQlAO,l:OH?LaVQIFSOg{M@Su8!KN'\''a' nqn.2016-06.io.spdk:cnode29293
[2024-07-25 07:21:04.241466] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29293: invalid model number 'Z#6=J\A'NQlQlAO,l:OH?LaVQIFSOg{M@Su8!KN'a'
00:15:31.932 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:15:31.932 {
00:15:31.932 "nqn": "nqn.2016-06.io.spdk:cnode29293",
00:15:31.932 "model_number": "Z#6=J\\A'\''NQlQlAO,l:OH?LaVQIFSOg{M@Su8!KN'\''a",
00:15:31.932 "method": "nvmf_create_subsystem",
00:15:31.932 "req_id": 1
00:15:31.932 }
00:15:31.932 Got JSON-RPC error response
00:15:31.932 response:
00:15:31.932 {
00:15:31.932 "code": -32602,
00:15:31.932 "message": "Invalid MN Z#6=J\\A'\''NQlQlAO,l:OH?LaVQIFSOg{M@Su8!KN'\''a"
00:15:31.932 }'
00:15:31.932 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:15:31.932 {
00:15:31.932 "nqn": "nqn.2016-06.io.spdk:cnode29293",
00:15:31.932 "model_number": "Z#6=J\\A'NQlQlAO,l:OH?LaVQIFSOg{M@Su8!KN'a",
00:15:31.932 "method": "nvmf_create_subsystem",
00:15:31.932 "req_id": 1
00:15:31.932 }
00:15:31.932 Got JSON-RPC error response
00:15:31.932 response:
00:15:31.932 {
00:15:31.932 "code": -32602,
00:15:31.932 "message": "Invalid MN Z#6=J\\A'NQlQlAO,l:OH?LaVQIFSOg{M@Su8!KN'a"
00:15:31.932 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:15:31.932 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma
00:15:32.191 [2024-07-25 07:21:04.460400] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd506f0/0xd54be0) succeed.
00:15:32.191 [2024-07-25 07:21:04.469553] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd51d30/0xd96270) succeed.
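For readers skimming the trace: the gen_random_s loops elided above build each candidate string one byte at a time with printf %x / echo -e. A minimal standalone sketch of that idiom (the helper name and the printable-ASCII range follow the trace; this is not the verbatim target/invalid.sh source):

    gen_random_s() {
        # build a string of $1 random printable characters (ASCII 33-126;
        # 32/space is skipped here because $(...) would strip it)
        local length=$1 ll code hex string=
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( 33 + RANDOM % 94 ))   # random printable code point
            hex=$(printf %x "$code")       # e.g. 118 -> 76
            string+=$(echo -e "\x$hex")    # e.g. \x76 -> v
        done
        echo "$string"
    }
    gen_random_s 21   # e.g. a 21-character candidate serial number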
00:15:32.191 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:15:32.450 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]]
00:15:32.450 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:15:32.450 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8
00:15:32.450 192.168.100.9'
00:15:32.450 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8
00:15:32.450 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421
[2024-07-25 07:21:04.966726] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:15:32.709 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:15:32.709 {
00:15:32.709 "nqn": "nqn.2016-06.io.spdk:cnode",
00:15:32.709 "listen_address": {
00:15:32.709 "trtype": "rdma",
00:15:32.709 "traddr": "192.168.100.8",
00:15:32.709 "trsvcid": "4421"
00:15:32.709 },
00:15:32.709 "method": "nvmf_subsystem_remove_listener",
00:15:32.709 "req_id": 1
00:15:32.709 }
00:15:32.709 Got JSON-RPC error response
00:15:32.709 response:
00:15:32.709 {
00:15:32.709 "code": -32602,
00:15:32.709 "message": "Invalid parameters"
00:15:32.709 }'
00:15:32.709 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:15:32.709 {
00:15:32.709 "nqn": "nqn.2016-06.io.spdk:cnode",
00:15:32.709 "listen_address": {
00:15:32.709 "trtype": "rdma",
00:15:32.709 "traddr": "192.168.100.8",
00:15:32.709 "trsvcid": "4421"
00:15:32.709 },
00:15:32.709 "method": "nvmf_subsystem_remove_listener",
00:15:32.709 "req_id": 1
00:15:32.709 }
00:15:32.709 Got JSON-RPC error response
00:15:32.709 response:
00:15:32.709 {
00:15:32.709 "code": -32602,
00:15:32.709 "message": "Invalid parameters"
00:15:32.709 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:15:32.709 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27989 -i 0
[2024-07-25 07:21:05.151349] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27989: invalid cntlid range [0-65519]
00:15:32.709 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:15:32.709 {
00:15:32.709 "nqn": "nqn.2016-06.io.spdk:cnode27989",
00:15:32.709 "min_cntlid": 0,
00:15:32.709 "method": "nvmf_create_subsystem",
00:15:32.709 "req_id": 1
00:15:32.709 }
00:15:32.709 Got JSON-RPC error response
00:15:32.709 response:
00:15:32.709 {
00:15:32.709 "code": -32602,
00:15:32.709 "message": "Invalid cntlid range [0-65519]"
00:15:32.709 }'
00:15:32.709 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:15:32.709 {
00:15:32.709 "nqn": "nqn.2016-06.io.spdk:cnode27989",
00:15:32.709 "min_cntlid": 0,
00:15:32.709 "method": "nvmf_create_subsystem",
00:15:32.709 "req_id": 1
00:15:32.709 }
00:15:32.709 Got JSON-RPC error response
00:15:32.709 response:
00:15:32.709 {
00:15:32.709 "code": -32602,
00:15:32.709 "message": "Invalid cntlid range [0-65519]"
00:15:32.709 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:15:32.709 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9855 -i 65520
[2024-07-25 07:21:05.344038] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9855: invalid cntlid range [65520-65519]
00:15:32.968 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:15:32.968 {
00:15:32.968 "nqn": "nqn.2016-06.io.spdk:cnode9855",
00:15:32.968 "min_cntlid": 65520,
00:15:32.968 "method": "nvmf_create_subsystem",
00:15:32.968 "req_id": 1
00:15:32.968 }
00:15:32.968 Got JSON-RPC error response
00:15:32.968 response:
00:15:32.968 {
00:15:32.968 "code": -32602,
00:15:32.968 "message": "Invalid cntlid range [65520-65519]"
00:15:32.968 }'
00:15:32.968 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:15:32.968 {
00:15:32.968 "nqn": "nqn.2016-06.io.spdk:cnode9855",
00:15:32.968 "min_cntlid": 65520,
00:15:32.968 "method": "nvmf_create_subsystem",
00:15:32.968 "req_id": 1
00:15:32.968 }
00:15:32.968 Got JSON-RPC error response
00:15:32.968 response:
00:15:32.968 {
00:15:32.968 "code": -32602,
00:15:32.968 "message": "Invalid cntlid range [65520-65519]"
00:15:32.968 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:15:33.228 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1044 -I 0
[2024-07-25 07:21:05.536752] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1044: invalid cntlid range [1-0]
00:15:33.228 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:15:33.228 {
00:15:33.228 "nqn": "nqn.2016-06.io.spdk:cnode1044",
00:15:33.228 "max_cntlid": 0,
00:15:33.228 "method": "nvmf_create_subsystem",
00:15:33.228 "req_id": 1
00:15:33.228 }
00:15:33.228 Got JSON-RPC error response
00:15:33.228 response:
00:15:33.228 {
00:15:33.228 "code": -32602,
00:15:33.228 "message": "Invalid cntlid range [1-0]"
00:15:33.228 }'
00:15:33.228 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:15:33.228 {
00:15:33.228 "nqn": "nqn.2016-06.io.spdk:cnode1044",
00:15:33.228 "max_cntlid": 0,
00:15:33.228 "method": "nvmf_create_subsystem",
00:15:33.228 "req_id": 1
00:15:33.228 }
00:15:33.228 Got JSON-RPC error response
00:15:33.228 response:
00:15:33.228 {
00:15:33.228 "code": -32602,
00:15:33.228 "message": "Invalid cntlid range [1-0]"
00:15:33.228 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:15:33.228 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7251 -I 65520
[2024-07-25 07:21:05.717388] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7251: invalid cntlid range [1-65520]
00:15:33.228 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:15:33.228 {
00:15:33.228 "nqn": "nqn.2016-06.io.spdk:cnode7251",
00:15:33.228 "max_cntlid": 65520,
00:15:33.228 "method": "nvmf_create_subsystem",
00:15:33.228 "req_id": 1
00:15:33.228 }
00:15:33.228 Got JSON-RPC error response
00:15:33.228 response:
00:15:33.228 {
00:15:33.228 "code": -32602,
00:15:33.228 "message": "Invalid cntlid range [1-65520]"
00:15:33.228 }'
00:15:33.228 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:15:33.228 {
00:15:33.228 "nqn": "nqn.2016-06.io.spdk:cnode7251",
00:15:33.228 "max_cntlid": 65520,
00:15:33.228 "method": "nvmf_create_subsystem",
00:15:33.228 "req_id": 1
00:15:33.228 }
00:15:33.228 Got JSON-RPC error response
00:15:33.228 response:
00:15:33.228 {
00:15:33.228 "code": -32602,
00:15:33.228 "message": "Invalid cntlid range [1-65520]"
00:15:33.228 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:15:33.487 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21109 -i 6 -I 5
[2024-07-25 07:21:05.898032] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21109: invalid cntlid range [6-5]
00:15:33.487 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:15:33.487 {
00:15:33.487 "nqn": "nqn.2016-06.io.spdk:cnode21109",
00:15:33.487 "min_cntlid": 6,
00:15:33.487 "max_cntlid": 5,
00:15:33.487 "method": "nvmf_create_subsystem",
00:15:33.487 "req_id": 1
00:15:33.487 }
00:15:33.487 Got JSON-RPC error response
00:15:33.487 response:
00:15:33.487 {
00:15:33.487 "code": -32602,
00:15:33.487 "message": "Invalid cntlid range [6-5]"
00:15:33.487 }'
00:15:33.487 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:15:33.487 {
00:15:33.487 "nqn": "nqn.2016-06.io.spdk:cnode21109",
00:15:33.487 "min_cntlid": 6,
00:15:33.487 "max_cntlid": 5,
00:15:33.487 "method": "nvmf_create_subsystem",
00:15:33.487 "req_id": 1
00:15:33.487 }
00:15:33.487 Got JSON-RPC error response
00:15:33.487 response:
00:15:33.487 {
00:15:33.487 "code": -32602,
00:15:33.487 "message": "Invalid cntlid range [6-5]"
00:15:33.487 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:15:33.487 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:15:33.745 {
00:15:33.745 "name": "foobar",
00:15:33.745 "method": "nvmf_delete_target",
00:15:33.745 "req_id": 1
00:15:33.745 }
00:15:33.745 Got JSON-RPC error response
00:15:33.745 response:
00:15:33.745 {
00:15:33.745 "code": -32602,
00:15:33.745 "message": "The specified target doesn'\''t exist, cannot delete it."
00:15:33.745 }'
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:15:33.745 {
00:15:33.745 "name": "foobar",
00:15:33.745 "method": "nvmf_delete_target",
00:15:33.745 "req_id": 1
00:15:33.745 }
00:15:33.745 Got JSON-RPC error response
00:15:33.745 response:
00:15:33.745 {
00:15:33.745 "code": -32602,
00:15:33.745 "message": "The specified target doesn't exist, cannot delete it."
00:15:33.745 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
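The teardown that follows is trap-driven: nvmftestinit registers nvmftestfini on SIGINT/SIGTERM/EXIT, and the test clears that trap once it reaches its own cleanup path (the trap - SIGINT SIGTERM EXIT entry below). A minimal sketch of the arrangement (function names follow the trace; the body shown is illustrative):

    nvmftestfini() {
        # unload the NVMe-oF initiator modules, as nvmfcleanup does below
        modprobe -v -r nvme-rdma || true
        modprobe -v -r nvme-fabrics || true
    }

    trap nvmftestfini SIGINT SIGTERM EXIT   # clean up even on early exit

    # ... test body runs here ...

    trap - SIGINT SIGTERM EXIT              # normal path: drop the handler
    nvmftestfini                            # ... and clean up exactly once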
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:15:33.745 rmmod nvme_rdma
00:15:33.745 rmmod nvme_fabrics
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2656865 ']'
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2656865
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2656865 ']'
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2656865
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2656865
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2656865'
00:15:33.745 killing process with pid 2656865
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2656865
00:15:33.745 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2656865
00:15:34.004 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:34.004 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:15:34.004
00:15:34.004 real 0m12.491s
00:15:34.004 user 0m21.291s
00:15:34.004 sys 0m7.312s
00:15:34.004 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:34.004 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:15:34.004 ************************************
END TEST nvmf_invalid
00:15:34.004 ************************************
00:15:34.004 07:21:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma
00:15:34.004 07:21:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:15:34.004 07:21:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:34.004 07:21:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:34.004 ************************************
00:15:34.004 START TEST nvmf_connect_stress
00:15:34.004 ************************************
00:15:34.004 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma
00:15:34.263 * Looking for test storage...
00:15:34.263 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-6 trace elided: each line prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the already-prefixed PATH, then exports and echoes the result ...]
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']'
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns
00:15:34.263 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:34.264 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:34.264 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:34.264 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:15:34.264 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:15:34.264 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable
00:15:34.264 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:42.382 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:15:42.382 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=()
00:15:42.382 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs
00:15:42.382 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=()
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=()
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=()
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs
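What follows is gather_supported_nvmf_pci_devs bucketing PCI functions by vendor:device ID (Intel E810/X722 vs. Mellanox mlx5) before resolving net devices. The script reads a pre-built pci_bus_cache map, visible in the @301-@318 entries below; a rough equivalent scan straight from sysfs looks like this (illustrative only; 0x1015 is the ConnectX-4 Lx ID this rig reports):

    intel=0x8086 mellanox=0x15b3
    mlx=()
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        # collect Mellanox ConnectX-4 Lx functions by vendor/device ID
        if [[ $vendor == "$mellanox" && $device == 0x1015 ]]; then
            mlx+=("${dev##*/}")    # bare PCI address, e.g. 0000:d9:00.0
        fi
    done
    for pci in "${mlx[@]}"; do
        echo "Found $pci ($mellanox - 0x1015)"
    done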
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=()
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=()
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=()
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}")
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}")
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}")
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:15:42.383 Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:15:42.383 Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:15:42.383 Found net devices under 0000:d9:00.0: mlx_0_0
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
under 0000:d9:00.1: mlx_0_1 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@105 -- # continue 2
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:15:42.383 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}'
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:15:42.384 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:15:42.384 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:15:42.384 altname enp217s0f0np0
00:15:42.384 altname ens818f0np0
00:15:42.384 inet 192.168.100.8/24 scope global mlx_0_0
00:15:42.384 valid_lft forever preferred_lft forever
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}'
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:15:42.384 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:15:42.384 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:15:42.384 altname enp217s0f1np1
00:15:42.384 altname
ens818f1np1 00:15:42.384 inet 192.168.100.9/24 scope global mlx_0_1 00:15:42.384 valid_lft forever preferred_lft forever 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:42.384 192.168.100.9' 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:42.384 192.168.100.9' 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:42.384 192.168.100.9' 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2662180 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2662180 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2662180 ']' 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.384 07:21:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:42.384 [2024-07-25 07:21:14.371462] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:15:42.384 [2024-07-25 07:21:14.371511] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.384 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.384 [2024-07-25 07:21:14.455020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:42.384 [2024-07-25 07:21:14.526734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.384 [2024-07-25 07:21:14.526773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.384 [2024-07-25 07:21:14.526782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.384 [2024-07-25 07:21:14.526790] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.384 [2024-07-25 07:21:14.526797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
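The run-up above is nvmfappstart plus waitforlisten: nvmf_tgt is launched with -i 0 -e 0xFFFF -m 0xE, and the harness polls the UNIX-domain RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100 per the trace) until the target answers. A minimal sketch of that start-and-wait pattern, assuming SPDK's stock rpc.py is on PATH and $rootdir points at the spdk checkout; the sleep interval is an assumption, not taken from this log:

# Start the target, then poll its RPC socket before issuing any configuration RPCs.
"$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
rpc_addr=/var/tmp/spdk.sock
for _ in $(seq 1 100); do                     # max_retries=100, as in the trace
    rpc.py -t 1 -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5                                 # assumed polling interval
done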
00:15:42.384 [2024-07-25 07:21:14.526903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:15:42.384 [2024-07-25 07:21:14.527004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:15:42.384 [2024-07-25 07:21:14.527006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:42.953 [2024-07-25 07:21:15.252622] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2419500/0x241d9f0) succeed.
00:15:42.953 [2024-07-25 07:21:15.261872] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x241aaa0/0x245f080) succeed.
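With the reactors running, connect_stress.sh@15 creates the RDMA transport and the mlx5 driver answers with one IB device per port; the records that follow stand up the subsystem the stress client will pound. Reconstructed as a standalone rpc.py sequence (a sketch of the same calls; the final nvmf_subsystem_add_ns step is the customary follow-up in SPDK target setup but is not visible verbatim in this trace):

rpc.py nvmf_create_transport -t rdma -u 8192 --num-shared-buffers 1024   # -u is the I/O unit size
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -a: allow any host
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc.py bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512-byte blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # assumed follow-up, not traced above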
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:42.953 [2024-07-25 07:21:15.377473] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:42.953 NULL1
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2662259
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:42.953 07:21:15
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:42.953 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.953 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:42.954 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:43.213 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:43.213 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:43.213 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:43.213 07:21:15 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:43.213 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:43.213 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:43.213 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:43.213 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:43.213 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:43.213 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.213 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.213 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.472 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.472 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:43.472 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.472 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.472 07:21:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.731 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.731 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:43.731 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.731 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.731 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.054 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.054 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:44.054 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.054 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.054 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.314 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.314 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:44.314 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.314 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.314 07:21:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.882 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.882 07:21:17 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:44.882 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.882 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.882 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.141 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.141 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:45.141 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.141 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.141 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.400 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.400 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:45.400 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.400 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.400 07:21:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.660 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.660 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:45.660 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.660 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.660 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.919 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.919 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:45.919 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.919 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.919 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.487 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.487 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:46.487 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.487 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.487 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.746 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.746 
07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:46.746 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.746 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.746 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.005 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.005 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:47.005 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.005 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.005 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.264 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.264 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:47.264 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.264 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.264 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.847 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.847 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:47.847 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.847 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.847 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.106 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.106 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:48.106 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.106 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.106 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.364 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.364 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:48.364 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.364 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.364 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.623 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
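Every kill -0 2662259 / rpc_cmd pair above is one pass of the watchdog loop at connect_stress.sh lines 34-35: while the connect_stress client (PERF_PID=2662259, launched earlier with -t 10 for a ten-second run) stays alive, the harness replays the batch of twenty RPCs it queued into rpc.txt, keeping the target's RPC path busy during the connection churn. Condensed into a sketch, with variable names taken from the trace but the exact script body assumed rather than quoted:

# Replay the queued RPCs for as long as the stress client is still running.
while kill -0 "$PERF_PID" 2>/dev/null; do
    rpc_cmd < "$rpcs"          # rpcs=.../test/nvmf/target/rpc.txt (20 commands)
done
wait "$PERF_PID"               # reap the client once kill -0 starts failing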
00:15:48.623 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:48.623 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.623 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.623 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.882 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.882 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:48.882 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.882 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.882 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.450 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.450 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:49.450 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.450 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.450 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.708 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.708 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:49.708 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.708 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.708 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.967 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.967 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:49.967 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.967 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.967 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.226 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.226 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:50.226 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.226 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.226 07:21:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.484 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:50.485 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:50.485 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.485 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.485 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.074 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.074 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:51.074 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.074 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.074 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.333 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.333 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:51.333 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.333 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.333 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.590 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.591 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:51.591 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.591 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.591 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.849 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.849 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:51.849 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.849 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.849 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.417 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.417 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:52.417 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.417 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.417 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.675 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.675 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:52.675 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.675 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.675 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.935 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.935 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:52.935 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.935 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.935 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.194 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662259 00:15:53.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2662259) - No such process 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2662259 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:53.194 rmmod nvme_rdma 00:15:53.194 rmmod nvme_fabrics 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2662180 ']' 00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # 
killprocess 2662180
00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2662180 ']'
00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2662180
00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname
00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:53.194 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2662180
00:15:53.453 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:15:53.453 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:15:53.453 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2662180'
00:15:53.453 killing process with pid 2662180
00:15:53.453 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2662180
00:15:53.453 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2662180
00:15:53.453 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:53.453 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:15:53.453
00:15:53.453 real 0m19.503s
00:15:53.453 user 0m41.758s
00:15:53.453 sys 0m8.616s
00:15:53.453 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:53.453 07:21:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:53.453 ************************************
00:15:53.453 END TEST nvmf_connect_stress
00:15:53.453 ************************************
00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma
00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:53.713 ************************************
00:15:53.713 START TEST nvmf_fused_ordering
00:15:53.713 ************************************
00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma
00:15:53.713 * Looking for test storage...
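Before the fused_ordering run gets underway, it is worth noting how killprocess tore down the previous target above: it checks that pid 2662180 is still alive, uses ps --no-headers -o comm= to confirm the pid still names one of our reactors (reactor_1) rather than a stray sudo, and only then kills and reaps it. A rough reconstruction of that helper from the traced steps (the exact control flow in autotest_common.sh may differ):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                         # no pid, nothing to do
    kill -0 "$pid" || return 0                        # already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1        # refuse to kill a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}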
00:15:53.713 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.713 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.714 07:21:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:01.840 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 
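
The array declarations above feed the PCI bucketing that follows: each supported NIC is classified by its vendor:device ID into e810, x722, or mlx. A minimal standalone sketch of that idea, assuming a plain lspci scan rather than SPDK's actual pci_bus_cache helper, and showing only a subset of the device IDs that appear in this log:

intel=0x8086 mellanox=0x15b3
declare -a e810 x722 mlx
while read -r addr vendor device; do
  case "$vendor:$device" in
    "$intel:0x1592" | "$intel:0x159b")       e810+=("$addr") ;;  # Intel E810
    "$intel:0x37d2")                         x722+=("$addr") ;;  # Intel X722
    "$mellanox:0x1015" | "$mellanox:0x1017") mlx+=("$addr")  ;;  # ConnectX-4 Lx / ConnectX-5
  esac
done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x"$3, "0x"$4}')
# With SPDK_TEST_NVMF_NICS=mlx5 the test then narrows pci_devs to the mlx bucket,
# which is why the two 0x15b3:0x1015 ports found below are the only devices kept.
pci_devs=("${mlx[@]}")
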
00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:01.841 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:01.841 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:01.841 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:01.841 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ 
rdma == tcp ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:01.841 07:21:33 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:01.841 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:01.842 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:01.842 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:01.842 altname enp217s0f0np0 00:16:01.842 altname ens818f0np0 00:16:01.842 inet 192.168.100.8/24 scope global mlx_0_0 00:16:01.842 valid_lft forever preferred_lft forever 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:01.842 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:01.842 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:01.842 altname enp217s0f1np1 00:16:01.842 altname ens818f1np1 00:16:01.842 inet 192.168.100.9/24 scope global mlx_0_1 00:16:01.842 valid_lft forever preferred_lft forever 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 
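
Both addresses above come out of the same three-stage pipeline over `ip -o -4 addr show`; reconstructed standalone from the xtrace (the mlx_0_* interface names are specific to this test bed):

get_ip_address() {
  local interface=$1
  # -o prints one record per line; field 4 is ADDR/PREFIX, e.g. 192.168.100.8/24
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
get_ip_address mlx_0_1   # -> 192.168.100.9
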
00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip 
-o -4 addr show mlx_0_1 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:01.842 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:01.842 192.168.100.9' 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:01.842 192.168.100.9' 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:01.842 192.168.100.9' 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2667996 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2667996 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2667996 ']' 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
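
The two target IPs are then peeled off the newline-separated RDMA_IP_LIST with head/tail, exactly as in the xtrace above:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
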
00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.842 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:01.842 [2024-07-25 07:21:34.109411] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:16:01.842 [2024-07-25 07:21:34.109467] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.842 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.842 [2024-07-25 07:21:34.192738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.842 [2024-07-25 07:21:34.261194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.842 [2024-07-25 07:21:34.261235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.842 [2024-07-25 07:21:34.261245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.842 [2024-07-25 07:21:34.261254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.842 [2024-07-25 07:21:34.261261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.842 [2024-07-25 07:21:34.261285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.434 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:02.434 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:16:02.434 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:02.434 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:02.434 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:02.434 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.434 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:02.434 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.434 07:21:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:02.694 [2024-07-25 07:21:34.973287] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1770e90/0x1775380) succeed. 00:16:02.694 [2024-07-25 07:21:34.982335] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1772390/0x17b6a10) succeed. 
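
At this point nvmfappstart has launched build/bin/nvmf_tgt in the background, and waitforlisten polls the UNIX-domain RPC socket until the app answers before any rpc_cmd is issued. A simplified sketch of that wait loop, assuming a plain polling retry rather than the real helper's full logic:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
  # rpc_get_methods only succeeds once the target listens on /var/tmp/spdk.sock
  if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
    break
  fi
  sleep 0.1
done
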
00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:02.694 [2024-07-25 07:21:35.054196] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:02.694 NULL1 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.694 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:02.694 [2024-07-25 07:21:35.111156] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
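
Collapsing the rpc_cmd wrappers (a thin layer over scripts/rpc.py talking to /var/tmp/spdk.sock), the target-side setup traced above amounts to the following invocations, with every name and flag taken from the xtrace:

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512 B blocks ("size: 1GB" below)
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
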
00:16:02.694 [2024-07-25 07:21:35.111206] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668277 ] 00:16:02.694 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.954 Attached to nqn.2016-06.io.spdk:cnode1 00:16:02.954 Namespace ID: 1 size: 1GB 00:16:02.954 fused_ordering(0) 00:16:02.954 fused_ordering(1) 00:16:02.954 fused_ordering(2) 00:16:02.954 fused_ordering(3) 00:16:02.954 fused_ordering(4) 00:16:02.954 fused_ordering(5) 00:16:02.954 fused_ordering(6) 00:16:02.954 fused_ordering(7) 00:16:02.954 fused_ordering(8) 00:16:02.954 fused_ordering(9) 00:16:02.954 fused_ordering(10) 00:16:02.954 fused_ordering(11) 00:16:02.954 fused_ordering(12) 00:16:02.954 fused_ordering(13) 00:16:02.954 fused_ordering(14) 00:16:02.954 fused_ordering(15) 00:16:02.954 fused_ordering(16) 00:16:02.954 fused_ordering(17) 00:16:02.954 fused_ordering(18) 00:16:02.954 fused_ordering(19) 00:16:02.954 fused_ordering(20) 00:16:02.954 fused_ordering(21) 00:16:02.954 fused_ordering(22) 00:16:02.954 fused_ordering(23) 00:16:02.954 fused_ordering(24) 00:16:02.954 fused_ordering(25) 00:16:02.954 fused_ordering(26) 00:16:02.954 fused_ordering(27) 00:16:02.954 fused_ordering(28) 00:16:02.954 fused_ordering(29) 00:16:02.954 fused_ordering(30) 00:16:02.954 fused_ordering(31) 00:16:02.954 fused_ordering(32) 00:16:02.954 fused_ordering(33) 00:16:02.954 fused_ordering(34) 00:16:02.954 fused_ordering(35) 00:16:02.954 fused_ordering(36) 00:16:02.954 fused_ordering(37) 00:16:02.954 fused_ordering(38) 00:16:02.954 fused_ordering(39) 00:16:02.954 fused_ordering(40) 00:16:02.954 fused_ordering(41) 00:16:02.954 fused_ordering(42) 00:16:02.954 fused_ordering(43) 00:16:02.954 fused_ordering(44) 00:16:02.954 fused_ordering(45) 00:16:02.954 fused_ordering(46) 00:16:02.954 fused_ordering(47) 00:16:02.954 fused_ordering(48) 00:16:02.954 fused_ordering(49) 00:16:02.954 fused_ordering(50) 00:16:02.954 fused_ordering(51) 00:16:02.954 fused_ordering(52) 00:16:02.954 fused_ordering(53) 00:16:02.954 fused_ordering(54) 00:16:02.954 fused_ordering(55) 00:16:02.954 fused_ordering(56) 00:16:02.954 fused_ordering(57) 00:16:02.954 fused_ordering(58) 00:16:02.954 fused_ordering(59) 00:16:02.954 fused_ordering(60) 00:16:02.954 fused_ordering(61) 00:16:02.954 fused_ordering(62) 00:16:02.954 fused_ordering(63) 00:16:02.954 fused_ordering(64) 00:16:02.954 fused_ordering(65) 00:16:02.954 fused_ordering(66) 00:16:02.954 fused_ordering(67) 00:16:02.954 fused_ordering(68) 00:16:02.954 fused_ordering(69) 00:16:02.954 fused_ordering(70) 00:16:02.954 fused_ordering(71) 00:16:02.954 fused_ordering(72) 00:16:02.954 fused_ordering(73) 00:16:02.954 fused_ordering(74) 00:16:02.954 fused_ordering(75) 00:16:02.954 fused_ordering(76) 00:16:02.954 fused_ordering(77) 00:16:02.954 fused_ordering(78) 00:16:02.954 fused_ordering(79) 00:16:02.954 fused_ordering(80) 00:16:02.954 fused_ordering(81) 00:16:02.954 fused_ordering(82) 00:16:02.954 fused_ordering(83) 00:16:02.954 fused_ordering(84) 00:16:02.954 fused_ordering(85) 00:16:02.954 fused_ordering(86) 00:16:02.954 fused_ordering(87) 00:16:02.954 fused_ordering(88) 00:16:02.954 fused_ordering(89) 00:16:02.954 fused_ordering(90) 00:16:02.954 fused_ordering(91) 00:16:02.954 fused_ordering(92) 00:16:02.954 fused_ordering(93) 00:16:02.954 fused_ordering(94) 00:16:02.954 fused_ordering(95) 00:16:02.954 fused_ordering(96) 
00:16:02.954 fused_ordering(97) 00:16:02.954 fused_ordering(98) 00:16:02.954 fused_ordering(99) 00:16:02.954 fused_ordering(100) 00:16:02.954 fused_ordering(101) 00:16:02.954 fused_ordering(102) 00:16:02.954 fused_ordering(103) 00:16:02.954 fused_ordering(104) 00:16:02.954 fused_ordering(105) 00:16:02.954 fused_ordering(106) 00:16:02.954 fused_ordering(107) 00:16:02.954 fused_ordering(108) 00:16:02.954 fused_ordering(109) 00:16:02.954 fused_ordering(110) 00:16:02.954 fused_ordering(111) 00:16:02.954 fused_ordering(112) 00:16:02.954 fused_ordering(113) 00:16:02.954 fused_ordering(114) 00:16:02.954 fused_ordering(115) 00:16:02.954 fused_ordering(116) 00:16:02.954 fused_ordering(117) 00:16:02.954 fused_ordering(118) 00:16:02.954 fused_ordering(119) 00:16:02.954 fused_ordering(120) 00:16:02.954 fused_ordering(121) 00:16:02.954 fused_ordering(122) 00:16:02.954 fused_ordering(123) 00:16:02.954 fused_ordering(124) 00:16:02.954 fused_ordering(125) 00:16:02.954 fused_ordering(126) 00:16:02.954 fused_ordering(127) 00:16:02.954 fused_ordering(128) 00:16:02.954 fused_ordering(129) 00:16:02.954 fused_ordering(130) 00:16:02.954 fused_ordering(131) 00:16:02.954 fused_ordering(132) 00:16:02.954 fused_ordering(133) 00:16:02.954 fused_ordering(134) 00:16:02.954 fused_ordering(135) 00:16:02.954 fused_ordering(136) 00:16:02.954 fused_ordering(137) 00:16:02.954 fused_ordering(138) 00:16:02.954 fused_ordering(139) 00:16:02.954 fused_ordering(140) 00:16:02.954 fused_ordering(141) 00:16:02.954 fused_ordering(142) 00:16:02.954 fused_ordering(143) 00:16:02.954 fused_ordering(144) 00:16:02.954 fused_ordering(145) 00:16:02.954 fused_ordering(146) 00:16:02.954 fused_ordering(147) 00:16:02.954 fused_ordering(148) 00:16:02.954 fused_ordering(149) 00:16:02.954 fused_ordering(150) 00:16:02.954 fused_ordering(151) 00:16:02.954 fused_ordering(152) 00:16:02.954 fused_ordering(153) 00:16:02.954 fused_ordering(154) 00:16:02.954 fused_ordering(155) 00:16:02.954 fused_ordering(156) 00:16:02.954 fused_ordering(157) 00:16:02.954 fused_ordering(158) 00:16:02.954 fused_ordering(159) 00:16:02.954 fused_ordering(160) 00:16:02.954 fused_ordering(161) 00:16:02.954 fused_ordering(162) 00:16:02.954 fused_ordering(163) 00:16:02.954 fused_ordering(164) 00:16:02.954 fused_ordering(165) 00:16:02.954 fused_ordering(166) 00:16:02.954 fused_ordering(167) 00:16:02.954 fused_ordering(168) 00:16:02.954 fused_ordering(169) 00:16:02.954 fused_ordering(170) 00:16:02.954 fused_ordering(171) 00:16:02.954 fused_ordering(172) 00:16:02.954 fused_ordering(173) 00:16:02.954 fused_ordering(174) 00:16:02.954 fused_ordering(175) 00:16:02.954 fused_ordering(176) 00:16:02.954 fused_ordering(177) 00:16:02.954 fused_ordering(178) 00:16:02.954 fused_ordering(179) 00:16:02.954 fused_ordering(180) 00:16:02.954 fused_ordering(181) 00:16:02.954 fused_ordering(182) 00:16:02.954 fused_ordering(183) 00:16:02.954 fused_ordering(184) 00:16:02.954 fused_ordering(185) 00:16:02.954 fused_ordering(186) 00:16:02.954 fused_ordering(187) 00:16:02.954 fused_ordering(188) 00:16:02.954 fused_ordering(189) 00:16:02.954 fused_ordering(190) 00:16:02.954 fused_ordering(191) 00:16:02.954 fused_ordering(192) 00:16:02.954 fused_ordering(193) 00:16:02.954 fused_ordering(194) 00:16:02.954 fused_ordering(195) 00:16:02.954 fused_ordering(196) 00:16:02.954 fused_ordering(197) 00:16:02.954 fused_ordering(198) 00:16:02.954 fused_ordering(199) 00:16:02.954 fused_ordering(200) 00:16:02.954 fused_ordering(201) 00:16:02.954 fused_ordering(202) 00:16:02.954 fused_ordering(203) 00:16:02.954 
fused_ordering(204) 00:16:02.954 fused_ordering(205) 00:16:02.954 fused_ordering(206) 00:16:02.954 fused_ordering(207) 00:16:02.954 fused_ordering(208) 00:16:02.954 fused_ordering(209) 00:16:02.954 fused_ordering(210) 00:16:02.954 fused_ordering(211) 00:16:02.954 fused_ordering(212) 00:16:02.954 fused_ordering(213) 00:16:02.954 fused_ordering(214) 00:16:02.954 fused_ordering(215) 00:16:02.954 fused_ordering(216) 00:16:02.954 fused_ordering(217) 00:16:02.954 fused_ordering(218) 00:16:02.954 fused_ordering(219) 00:16:02.954 fused_ordering(220) 00:16:02.954 fused_ordering(221) 00:16:02.954 fused_ordering(222) 00:16:02.954 fused_ordering(223) 00:16:02.954 fused_ordering(224) 00:16:02.954 fused_ordering(225) 00:16:02.954 fused_ordering(226) 00:16:02.954 fused_ordering(227) 00:16:02.954 fused_ordering(228) 00:16:02.954 fused_ordering(229) 00:16:02.954 fused_ordering(230) 00:16:02.954 fused_ordering(231) 00:16:02.954 fused_ordering(232) 00:16:02.954 fused_ordering(233) 00:16:02.955 fused_ordering(234) 00:16:02.955 fused_ordering(235) 00:16:02.955 fused_ordering(236) 00:16:02.955 fused_ordering(237) 00:16:02.955 fused_ordering(238) 00:16:02.955 fused_ordering(239) 00:16:02.955 fused_ordering(240) 00:16:02.955 fused_ordering(241) 00:16:02.955 fused_ordering(242) 00:16:02.955 fused_ordering(243) 00:16:02.955 fused_ordering(244) 00:16:02.955 fused_ordering(245) 00:16:02.955 fused_ordering(246) 00:16:02.955 fused_ordering(247) 00:16:02.955 fused_ordering(248) 00:16:02.955 fused_ordering(249) 00:16:02.955 fused_ordering(250) 00:16:02.955 fused_ordering(251) 00:16:02.955 fused_ordering(252) 00:16:02.955 fused_ordering(253) 00:16:02.955 fused_ordering(254) 00:16:02.955 fused_ordering(255) 00:16:02.955 fused_ordering(256) 00:16:02.955 fused_ordering(257) 00:16:02.955 fused_ordering(258) 00:16:02.955 fused_ordering(259) 00:16:02.955 fused_ordering(260) 00:16:02.955 fused_ordering(261) 00:16:02.955 fused_ordering(262) 00:16:02.955 fused_ordering(263) 00:16:02.955 fused_ordering(264) 00:16:02.955 fused_ordering(265) 00:16:02.955 fused_ordering(266) 00:16:02.955 fused_ordering(267) 00:16:02.955 fused_ordering(268) 00:16:02.955 fused_ordering(269) 00:16:02.955 fused_ordering(270) 00:16:02.955 fused_ordering(271) 00:16:02.955 fused_ordering(272) 00:16:02.955 fused_ordering(273) 00:16:02.955 fused_ordering(274) 00:16:02.955 fused_ordering(275) 00:16:02.955 fused_ordering(276) 00:16:02.955 fused_ordering(277) 00:16:02.955 fused_ordering(278) 00:16:02.955 fused_ordering(279) 00:16:02.955 fused_ordering(280) 00:16:02.955 fused_ordering(281) 00:16:02.955 fused_ordering(282) 00:16:02.955 fused_ordering(283) 00:16:02.955 fused_ordering(284) 00:16:02.955 fused_ordering(285) 00:16:02.955 fused_ordering(286) 00:16:02.955 fused_ordering(287) 00:16:02.955 fused_ordering(288) 00:16:02.955 fused_ordering(289) 00:16:02.955 fused_ordering(290) 00:16:02.955 fused_ordering(291) 00:16:02.955 fused_ordering(292) 00:16:02.955 fused_ordering(293) 00:16:02.955 fused_ordering(294) 00:16:02.955 fused_ordering(295) 00:16:02.955 fused_ordering(296) 00:16:02.955 fused_ordering(297) 00:16:02.955 fused_ordering(298) 00:16:02.955 fused_ordering(299) 00:16:02.955 fused_ordering(300) 00:16:02.955 fused_ordering(301) 00:16:02.955 fused_ordering(302) 00:16:02.955 fused_ordering(303) 00:16:02.955 fused_ordering(304) 00:16:02.955 fused_ordering(305) 00:16:02.955 fused_ordering(306) 00:16:02.955 fused_ordering(307) 00:16:02.955 fused_ordering(308) 00:16:02.955 fused_ordering(309) 00:16:02.955 fused_ordering(310) 00:16:02.955 fused_ordering(311) 
00:16:02.955 fused_ordering(312) 00:16:02.955 fused_ordering(313) 00:16:02.955 fused_ordering(314) 00:16:02.955 fused_ordering(315) 00:16:02.955 fused_ordering(316) 00:16:02.955 fused_ordering(317) 00:16:02.955 fused_ordering(318) 00:16:02.955 fused_ordering(319) 00:16:02.955 fused_ordering(320) 00:16:02.955 fused_ordering(321) 00:16:02.955 fused_ordering(322) 00:16:02.955 fused_ordering(323) 00:16:02.955 fused_ordering(324) 00:16:02.955 fused_ordering(325) 00:16:02.955 fused_ordering(326) 00:16:02.955 fused_ordering(327) 00:16:02.955 fused_ordering(328) 00:16:02.955 fused_ordering(329) 00:16:02.955 fused_ordering(330) 00:16:02.955 fused_ordering(331) 00:16:02.955 fused_ordering(332) 00:16:02.955 fused_ordering(333) 00:16:02.955 fused_ordering(334) 00:16:02.955 fused_ordering(335) 00:16:02.955 fused_ordering(336) 00:16:02.955 fused_ordering(337) 00:16:02.955 fused_ordering(338) 00:16:02.955 fused_ordering(339) 00:16:02.955 fused_ordering(340) 00:16:02.955 fused_ordering(341) 00:16:02.955 fused_ordering(342) 00:16:02.955 fused_ordering(343) 00:16:02.955 fused_ordering(344) 00:16:02.955 fused_ordering(345) 00:16:02.955 fused_ordering(346) 00:16:02.955 fused_ordering(347) 00:16:02.955 fused_ordering(348) 00:16:02.955 fused_ordering(349) 00:16:02.955 fused_ordering(350) 00:16:02.955 fused_ordering(351) 00:16:02.955 fused_ordering(352) 00:16:02.955 fused_ordering(353) 00:16:02.955 fused_ordering(354) 00:16:02.955 fused_ordering(355) 00:16:02.955 fused_ordering(356) 00:16:02.955 fused_ordering(357) 00:16:02.955 fused_ordering(358) 00:16:02.955 fused_ordering(359) 00:16:02.955 fused_ordering(360) 00:16:02.955 fused_ordering(361) 00:16:02.955 fused_ordering(362) 00:16:02.955 fused_ordering(363) 00:16:02.955 fused_ordering(364) 00:16:02.955 fused_ordering(365) 00:16:02.955 fused_ordering(366) 00:16:02.955 fused_ordering(367) 00:16:02.955 fused_ordering(368) 00:16:02.955 fused_ordering(369) 00:16:02.955 fused_ordering(370) 00:16:02.955 fused_ordering(371) 00:16:02.955 fused_ordering(372) 00:16:02.955 fused_ordering(373) 00:16:02.955 fused_ordering(374) 00:16:02.955 fused_ordering(375) 00:16:02.955 fused_ordering(376) 00:16:02.955 fused_ordering(377) 00:16:02.955 fused_ordering(378) 00:16:02.955 fused_ordering(379) 00:16:02.955 fused_ordering(380) 00:16:02.955 fused_ordering(381) 00:16:02.955 fused_ordering(382) 00:16:02.955 fused_ordering(383) 00:16:02.955 fused_ordering(384) 00:16:02.955 fused_ordering(385) 00:16:02.955 fused_ordering(386) 00:16:02.955 fused_ordering(387) 00:16:02.955 fused_ordering(388) 00:16:02.955 fused_ordering(389) 00:16:02.955 fused_ordering(390) 00:16:02.955 fused_ordering(391) 00:16:02.955 fused_ordering(392) 00:16:02.955 fused_ordering(393) 00:16:02.955 fused_ordering(394) 00:16:02.955 fused_ordering(395) 00:16:02.955 fused_ordering(396) 00:16:02.955 fused_ordering(397) 00:16:02.955 fused_ordering(398) 00:16:02.955 fused_ordering(399) 00:16:02.955 fused_ordering(400) 00:16:02.955 fused_ordering(401) 00:16:02.955 fused_ordering(402) 00:16:02.955 fused_ordering(403) 00:16:02.955 fused_ordering(404) 00:16:02.955 fused_ordering(405) 00:16:02.955 fused_ordering(406) 00:16:02.955 fused_ordering(407) 00:16:02.955 fused_ordering(408) 00:16:02.955 fused_ordering(409) 00:16:02.955 fused_ordering(410) 00:16:03.215 fused_ordering(411) 00:16:03.215 fused_ordering(412) 00:16:03.215 fused_ordering(413) 00:16:03.215 fused_ordering(414) 00:16:03.215 fused_ordering(415) 00:16:03.215 fused_ordering(416) 00:16:03.215 fused_ordering(417) 00:16:03.215 fused_ordering(418) 00:16:03.215 
fused_ordering(419) 00:16:03.215 fused_ordering(420) 00:16:03.215 fused_ordering(421) 00:16:03.215 fused_ordering(422) 00:16:03.215 fused_ordering(423) 00:16:03.215 fused_ordering(424) 00:16:03.215 fused_ordering(425) 00:16:03.215 fused_ordering(426) 00:16:03.215 fused_ordering(427) 00:16:03.215 fused_ordering(428) 00:16:03.215 fused_ordering(429) 00:16:03.215 fused_ordering(430) 00:16:03.215 fused_ordering(431) 00:16:03.215 fused_ordering(432) 00:16:03.215 fused_ordering(433) 00:16:03.215 fused_ordering(434) 00:16:03.215 fused_ordering(435) 00:16:03.215 fused_ordering(436) 00:16:03.215 fused_ordering(437) 00:16:03.215 fused_ordering(438) 00:16:03.215 fused_ordering(439) 00:16:03.215 fused_ordering(440) 00:16:03.215 fused_ordering(441) 00:16:03.215 fused_ordering(442) 00:16:03.215 fused_ordering(443) 00:16:03.215 fused_ordering(444) 00:16:03.215 fused_ordering(445) 00:16:03.215 fused_ordering(446) 00:16:03.215 fused_ordering(447) 00:16:03.215 fused_ordering(448) 00:16:03.215 fused_ordering(449) 00:16:03.215 fused_ordering(450) 00:16:03.215 fused_ordering(451) 00:16:03.216 fused_ordering(452) 00:16:03.216 fused_ordering(453) 00:16:03.216 fused_ordering(454) 00:16:03.216 fused_ordering(455) 00:16:03.216 fused_ordering(456) 00:16:03.216 fused_ordering(457) 00:16:03.216 fused_ordering(458) 00:16:03.216 fused_ordering(459) 00:16:03.216 fused_ordering(460) 00:16:03.216 fused_ordering(461) 00:16:03.216 fused_ordering(462) 00:16:03.216 fused_ordering(463) 00:16:03.216 fused_ordering(464) 00:16:03.216 fused_ordering(465) 00:16:03.216 fused_ordering(466) 00:16:03.216 fused_ordering(467) 00:16:03.216 fused_ordering(468) 00:16:03.216 fused_ordering(469) 00:16:03.216 fused_ordering(470) 00:16:03.216 fused_ordering(471) 00:16:03.216 fused_ordering(472) 00:16:03.216 fused_ordering(473) 00:16:03.216 fused_ordering(474) 00:16:03.216 fused_ordering(475) 00:16:03.216 fused_ordering(476) 00:16:03.216 fused_ordering(477) 00:16:03.216 fused_ordering(478) 00:16:03.216 fused_ordering(479) 00:16:03.216 fused_ordering(480) 00:16:03.216 fused_ordering(481) 00:16:03.216 fused_ordering(482) 00:16:03.216 fused_ordering(483) 00:16:03.216 fused_ordering(484) 00:16:03.216 fused_ordering(485) 00:16:03.216 fused_ordering(486) 00:16:03.216 fused_ordering(487) 00:16:03.216 fused_ordering(488) 00:16:03.216 fused_ordering(489) 00:16:03.216 fused_ordering(490) 00:16:03.216 fused_ordering(491) 00:16:03.216 fused_ordering(492) 00:16:03.216 fused_ordering(493) 00:16:03.216 fused_ordering(494) 00:16:03.216 fused_ordering(495) 00:16:03.216 fused_ordering(496) 00:16:03.216 fused_ordering(497) 00:16:03.216 fused_ordering(498) 00:16:03.216 fused_ordering(499) 00:16:03.216 fused_ordering(500) 00:16:03.216 fused_ordering(501) 00:16:03.216 fused_ordering(502) 00:16:03.216 fused_ordering(503) 00:16:03.216 fused_ordering(504) 00:16:03.216 fused_ordering(505) 00:16:03.216 fused_ordering(506) 00:16:03.216 fused_ordering(507) 00:16:03.216 fused_ordering(508) 00:16:03.216 fused_ordering(509) 00:16:03.216 fused_ordering(510) 00:16:03.216 fused_ordering(511) 00:16:03.216 fused_ordering(512) 00:16:03.216 fused_ordering(513) 00:16:03.216 fused_ordering(514) 00:16:03.216 fused_ordering(515) 00:16:03.216 fused_ordering(516) 00:16:03.216 fused_ordering(517) 00:16:03.216 fused_ordering(518) 00:16:03.216 fused_ordering(519) 00:16:03.216 fused_ordering(520) 00:16:03.216 fused_ordering(521) 00:16:03.216 fused_ordering(522) 00:16:03.216 fused_ordering(523) 00:16:03.216 fused_ordering(524) 00:16:03.216 fused_ordering(525) 00:16:03.216 fused_ordering(526) 
00:16:03.216 fused_ordering(527) 00:16:03.216 fused_ordering(528) 00:16:03.216 fused_ordering(529) 00:16:03.216 fused_ordering(530) 00:16:03.216 fused_ordering(531) 00:16:03.216 fused_ordering(532) 00:16:03.216 fused_ordering(533) 00:16:03.216 fused_ordering(534) 00:16:03.216 fused_ordering(535) 00:16:03.216 fused_ordering(536) 00:16:03.216 fused_ordering(537) 00:16:03.216 fused_ordering(538) 00:16:03.216 fused_ordering(539) 00:16:03.216 fused_ordering(540) 00:16:03.216 fused_ordering(541) 00:16:03.216 fused_ordering(542) 00:16:03.216 fused_ordering(543) 00:16:03.216 fused_ordering(544) 00:16:03.216 fused_ordering(545) 00:16:03.216 fused_ordering(546) 00:16:03.216 fused_ordering(547) 00:16:03.216 fused_ordering(548) 00:16:03.216 fused_ordering(549) 00:16:03.216 fused_ordering(550) 00:16:03.216 fused_ordering(551) 00:16:03.216 fused_ordering(552) 00:16:03.216 fused_ordering(553) 00:16:03.216 fused_ordering(554) 00:16:03.216 fused_ordering(555) 00:16:03.216 fused_ordering(556) 00:16:03.216 fused_ordering(557) 00:16:03.216 fused_ordering(558) 00:16:03.216 fused_ordering(559) 00:16:03.216 fused_ordering(560) 00:16:03.216 fused_ordering(561) 00:16:03.216 fused_ordering(562) 00:16:03.216 fused_ordering(563) 00:16:03.216 fused_ordering(564) 00:16:03.216 fused_ordering(565) 00:16:03.216 fused_ordering(566) 00:16:03.216 fused_ordering(567) 00:16:03.216 fused_ordering(568) 00:16:03.216 fused_ordering(569) 00:16:03.216 fused_ordering(570) 00:16:03.216 fused_ordering(571) 00:16:03.216 fused_ordering(572) 00:16:03.216 fused_ordering(573) 00:16:03.216 fused_ordering(574) 00:16:03.216 fused_ordering(575) 00:16:03.216 fused_ordering(576) 00:16:03.216 fused_ordering(577) 00:16:03.216 fused_ordering(578) 00:16:03.216 fused_ordering(579) 00:16:03.216 fused_ordering(580) 00:16:03.216 fused_ordering(581) 00:16:03.216 fused_ordering(582) 00:16:03.216 fused_ordering(583) 00:16:03.216 fused_ordering(584) 00:16:03.216 fused_ordering(585) 00:16:03.216 fused_ordering(586) 00:16:03.216 fused_ordering(587) 00:16:03.216 fused_ordering(588) 00:16:03.216 fused_ordering(589) 00:16:03.216 fused_ordering(590) 00:16:03.216 fused_ordering(591) 00:16:03.216 fused_ordering(592) 00:16:03.216 fused_ordering(593) 00:16:03.216 fused_ordering(594) 00:16:03.216 fused_ordering(595) 00:16:03.216 fused_ordering(596) 00:16:03.216 fused_ordering(597) 00:16:03.216 fused_ordering(598) 00:16:03.216 fused_ordering(599) 00:16:03.216 fused_ordering(600) 00:16:03.216 fused_ordering(601) 00:16:03.216 fused_ordering(602) 00:16:03.216 fused_ordering(603) 00:16:03.216 fused_ordering(604) 00:16:03.216 fused_ordering(605) 00:16:03.216 fused_ordering(606) 00:16:03.216 fused_ordering(607) 00:16:03.216 fused_ordering(608) 00:16:03.216 fused_ordering(609) 00:16:03.216 fused_ordering(610) 00:16:03.216 fused_ordering(611) 00:16:03.216 fused_ordering(612) 00:16:03.216 fused_ordering(613) 00:16:03.216 fused_ordering(614) 00:16:03.216 fused_ordering(615) 00:16:03.216 fused_ordering(616) 00:16:03.216 fused_ordering(617) 00:16:03.216 fused_ordering(618) 00:16:03.216 fused_ordering(619) 00:16:03.216 fused_ordering(620) 00:16:03.216 fused_ordering(621) 00:16:03.216 fused_ordering(622) 00:16:03.216 fused_ordering(623) 00:16:03.216 fused_ordering(624) 00:16:03.216 fused_ordering(625) 00:16:03.216 fused_ordering(626) 00:16:03.216 fused_ordering(627) 00:16:03.216 fused_ordering(628) 00:16:03.216 fused_ordering(629) 00:16:03.216 fused_ordering(630) 00:16:03.216 fused_ordering(631) 00:16:03.216 fused_ordering(632) 00:16:03.216 fused_ordering(633) 00:16:03.216 
fused_ordering(634) ... fused_ordering(1023) [390 consecutive fused_ordering entries, numbered 634 through 1023 and logged between 00:16:03.216 and 00:16:03.477, collapsed] 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:03.477 rmmod nvme_rdma 00:16:03.477 rmmod nvme_fabrics 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 --
# set -e 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2667996 ']' 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2667996 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2667996 ']' 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2667996 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:16:03.477 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:03.478 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2667996 00:16:03.478 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:03.478 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:03.478 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2667996' 00:16:03.478 killing process with pid 2667996 00:16:03.478 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2667996 00:16:03.478 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2667996 00:16:03.737 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:03.737 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:03.737 00:16:03.737 real 0m10.066s 00:16:03.737 user 0m4.888s 00:16:03.737 sys 0m6.430s 00:16:03.737 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:03.737 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:03.737 ************************************ 00:16:03.737 END TEST nvmf_fused_ordering 00:16:03.737 ************************************ 00:16:03.737 07:21:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:16:03.737 07:21:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:03.737 07:21:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:03.737 07:21:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:03.737 ************************************ 00:16:03.737 START TEST nvmf_ns_masking 00:16:03.737 ************************************ 00:16:03.737 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:16:03.997 * Looking for test storage... 
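The teardown traced above follows one recognizable pattern: drop the EXIT trap, retry unloading the nvme-rdma modules with errors tolerated, then kill the nvmf target by PID. A minimal Bash sketch of that pattern, reconstructed only from the commands echoed in this log (the function name cleanup_nvmf_target and the sleep between retries are illustrative assumptions, not SPDK's actual nvmftestfini source):

    cleanup_nvmf_target() {
        local pid=$1
        trap - SIGINT SIGTERM EXIT           # drop the error trap installed at test start
        set +e                                # module removal may fail while references remain
        for i in {1..20}; do
            modprobe -v -r nvme-rdma && break # retry until nvme_rdma actually unloads
            sleep 1
        done
        modprobe -v -r nvme-fabrics
        set -e
        if kill -0 "$pid" 2>/dev/null; then   # target process still alive?
            kill "$pid"                       # request shutdown, as 'kill 2667996' does above
            wait "$pid"                       # reap it, mirroring the 'wait 2667996' step
        fi
    }

The log additionally guards the kill with a ps --no-headers -o comm= check so only the expected reactor process is signaled; the sketch omits that refinement.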
00:16:03.997 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:03.997 
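Before any RPCs run, the sourced nvmf/common.sh establishes the identity and app arguments used for the rest of the test: fixed listener ports, an IP prefix, a generated host NQN, and the target's base flags. A minimal Bash sketch assembled from the values visible in this log (the derivation of NVME_HOSTID from the NQN suffix is an assumption for illustration, not necessarily how the script computes it):

    NVMF_APP_SHM_ID=0                            # matches the '-i 0' passed to nvmf_tgt later
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_IP_PREFIX=192.168.100
    NVME_HOSTNQN=$(nvme gen-hostnqn)             # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}              # bare UUID after the last colon (assumed derivation)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    NVMF_APP=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id and full tracepoint mask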
07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2091deb8-5902-4ada-875a-45e851619ce5 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:03.997 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3d1194a2-20a8-4cf3-8775-752be40584dc 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=fc96f3cd-30b6-470c-90d5-4cfef1f55ed0 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:03.998 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:13.998 07:21:44 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # 
pci_devs=("${mlx[@]}") 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:13.998 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:13.998 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:13.998 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.998 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:13.999 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:13.999 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:13.999 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:13.999 altname enp217s0f0np0 00:16:13.999 altname ens818f0np0 00:16:13.999 inet 192.168.100.8/24 scope global mlx_0_0 00:16:13.999 valid_lft forever preferred_lft forever 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:13.999 07:21:44 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:13.999 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:13.999 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:13.999 altname enp217s0f1np1 00:16:13.999 altname ens818f1np1 00:16:13.999 inet 192.168.100.9/24 scope global mlx_0_1 00:16:13.999 valid_lft forever preferred_lft forever 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:13.999 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:13.999 07:21:45 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:13.999 192.168.100.9' 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:13.999 192.168.100.9' 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:13.999 192.168.100.9' 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:13.999 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2672453 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # 
waitforlisten 2672453 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2672453 ']' 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:14.000 [2024-07-25 07:21:45.134711] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:16:14.000 [2024-07-25 07:21:45.134760] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.000 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.000 [2024-07-25 07:21:45.217958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.000 [2024-07-25 07:21:45.292876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.000 [2024-07-25 07:21:45.292917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.000 [2024-07-25 07:21:45.292927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.000 [2024-07-25 07:21:45.292936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.000 [2024-07-25 07:21:45.292943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.000 [2024-07-25 07:21:45.292965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.000 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:14.000 [2024-07-25 07:21:46.148545] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f6db90/0x1f72080) succeed. 00:16:14.000 [2024-07-25 07:21:46.157883] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f6f090/0x1fb3710) succeed. 
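The target bring-up traced here reduces to three steps: start nvmf_tgt, wait for its RPC socket, and create the RDMA transport. A minimal sketch using the binaries and flags echoed above (the rootdir variable and the polling loop stand in for waitforlisten and are assumptions for illustration):

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    # poll until the target answers on its UNIX-domain RPC socket
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done
    # create the RDMA transport with the buffer sizing used in this run
    "$rootdir/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

Once the transport exists, the test proceeds exactly as logged: create Malloc bdevs, add them as namespaces of nqn.2016-06.io.spdk:cnode1, and add a listener on 192.168.100.8 port 4420.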
00:16:14.000 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:14.000 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:14.000 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:14.000 Malloc1 00:16:14.000 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:14.259 Malloc2 00:16:14.259 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:14.259 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:14.519 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:14.778 [2024-07-25 07:21:47.070264] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:14.778 07:21:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:14.778 07:21:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fc96f3cd-30b6-470c-90d5-4cfef1f55ed0 -a 192.168.100.8 -s 4420 -i 4 00:16:15.038 07:21:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:15.038 07:21:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:15.038 07:21:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.038 07:21:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:15.038 07:21:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:16.945 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:16.945 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:16.945 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:16.945 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:16.946 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:16.946 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:16.946 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:16.946 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:16:16.946 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:16.946 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:17.205 [ 0]:0x1 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3420d8ba407f4b2e8bf2b5d69e1a5a5f 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3420d8ba407f4b2e8bf2b5d69e1a5a5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:17.205 [ 0]:0x1 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:17.205 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:17.464 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3420d8ba407f4b2e8bf2b5d69e1a5a5f 00:16:17.464 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3420d8ba407f4b2e8bf2b5d69e1a5a5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:17.464 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:17.464 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:17.464 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:17.464 [ 1]:0x2 00:16:17.464 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:17.464 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:17.464 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f46f711027314cfea21e9633646aa7d4 00:16:17.464 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f46f711027314cfea21e9633646aa7d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:17.464 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:17.464 07:21:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:16:17.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.724 07:21:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.983 07:21:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:18.241 07:21:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:18.241 07:21:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fc96f3cd-30b6-470c-90d5-4cfef1f55ed0 -a 192.168.100.8 -s 4420 -i 4 00:16:18.499 07:21:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:18.499 07:21:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:18.500 07:21:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.500 07:21:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:18.500 07:21:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:18.500 07:21:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:20.404 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:20.404 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:20.404 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.404 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:20.404 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.404 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:20.404 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:20.404 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:20.663 [ 0]:0x2 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:20.663 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.663 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f46f711027314cfea21e9633646aa7d4 00:16:20.663 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f46f711027314cfea21e9633646aa7d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.663 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:20.923 [ 0]:0x1 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:20.923 07:21:53 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3420d8ba407f4b2e8bf2b5d69e1a5a5f 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3420d8ba407f4b2e8bf2b5d69e1a5a5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:20.923 [ 1]:0x2 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f46f711027314cfea21e9633646aa7d4 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f46f711027314cfea21e9633646aa7d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.923 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:21.181 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:21.181 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:21.181 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:21.181 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:21.181 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.181 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:21.181 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.181 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:21.181 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:21.181 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:21.181 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:21.182 [ 0]:0x2 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f46f711027314cfea21e9633646aa7d4 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f46f711027314cfea21e9633646aa7d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:21.182 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:21.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.440 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:21.699 07:21:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:21.699 07:21:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fc96f3cd-30b6-470c-90d5-4cfef1f55ed0 -a 192.168.100.8 -s 4420 -i 4 00:16:21.959 07:21:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:21.959 07:21:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:21.959 07:21:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.959 07:21:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:21.959 07:21:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:21.959 07:21:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.922 07:21:56 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:23.922 [ 0]:0x1 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:23.922 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:24.180 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3420d8ba407f4b2e8bf2b5d69e1a5a5f 00:16:24.180 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3420d8ba407f4b2e8bf2b5d69e1a5a5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:24.180 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:24.180 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:24.180 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:24.180 [ 1]:0x2 00:16:24.180 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:24.180 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:24.180 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f46f711027314cfea21e9633646aa7d4 00:16:24.181 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f46f711027314cfea21e9633646aa7d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:24.181 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:24.181 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:24.181 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:24.181 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:24.181 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:24.181 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.181 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:24.181 07:21:56 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.181 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:24.181 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:24.181 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:24.440 [ 0]:0x2 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f46f711027314cfea21e9633646aa7d4 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f46f711027314cfea21e9633646aa7d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:16:24.440 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:16:24.699 [2024-07-25 07:21:56.969600] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:16:24.699 request:
00:16:24.699 {
00:16:24.699 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:24.699 "nsid": 2,
00:16:24.699 "host": "nqn.2016-06.io.spdk:host1",
00:16:24.699 "method": "nvmf_ns_remove_host",
00:16:24.699 "req_id": 1
00:16:24.699 }
00:16:24.699 Got JSON-RPC error response
00:16:24.699 response:
00:16:24.699 {
00:16:24.699 "code": -32602,
00:16:24.699 "message": "Invalid parameters"
00:16:24.699 }
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:24.699 07:21:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:24.699 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:24.699 07:21:57 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:24.699 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:24.699 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:24.699 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:24.699 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.700 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.700 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.700 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:24.700 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:24.700 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:24.700 [ 0]:0x2 00:16:24.700 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:24.700 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:24.700 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f46f711027314cfea21e9633646aa7d4 00:16:24.700 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f46f711027314cfea21e9633646aa7d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:24.700 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:24.700 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.959 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2674746 00:16:24.959 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:24.959 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.959 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2674746 /var/tmp/host.sock 00:16:24.959 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2674746 ']' 00:16:24.959 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:24.959 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:24.959 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:24.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
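[Annotation] The trace above is the core of the masking test: namespace 1 is re-added with --no-auto-visible, so it stays hidden (all-zero NGUID) until nvmf_ns_add_host exposes it to a specific host NQN, and nvmf_ns_remove_host hides it again; the NOT-wrapped nvmf_ns_remove_host against namespace 2, which was added auto-visible, is expected to fail with the -32602 error shown. A minimal sketch of the same flow, using only calls that appear in the trace ($rpc is shorthand introduced here for readability):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # register the namespace masked by default
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # expose nsid 1 to one host NQN, then hide it again
  $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # visibility probe used by ns_is_visible(): a masked namespace reports an all-zero NGUID
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

The second spdk_tgt being started at this point (-r /var/tmp/host.sock -m 2) acts as an NVMe-oF host for the RPC-driven checks that follow.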
00:16:24.959 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:24.959 07:21:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:25.218 [2024-07-25 07:21:57.489237] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:16:25.218 [2024-07-25 07:21:57.489291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674746 ] 00:16:25.218 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.218 [2024-07-25 07:21:57.573114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.218 [2024-07-25 07:21:57.643587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.787 07:21:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.787 07:21:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:25.787 07:21:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:26.046 07:21:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:26.306 07:21:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2091deb8-5902-4ada-875a-45e851619ce5 00:16:26.306 07:21:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:26.306 07:21:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2091DEB859024ADA875A45E851619CE5 -i 00:16:26.306 07:21:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3d1194a2-20a8-4cf3-8775-752be40584dc 00:16:26.306 07:21:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:26.306 07:21:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3D1194A220A84CF38775752BE40584DC -i 00:16:26.565 07:21:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:26.824 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:26.824 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:26.824 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:27.082 nvme0n1 00:16:27.083 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:27.083 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:27.342 nvme1n2 00:16:27.342 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:27.342 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:27.342 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:27.342 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:27.342 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:27.600 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:27.600 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:27.600 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:27.600 07:21:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:27.859 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2091deb8-5902-4ada-875a-45e851619ce5 == \2\0\9\1\d\e\b\8\-\5\9\0\2\-\4\a\d\a\-\8\7\5\a\-\4\5\e\8\5\1\6\1\9\c\e\5 ]] 00:16:27.859 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:27.859 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:27.859 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:27.859 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 3d1194a2-20a8-4cf3-8775-752be40584dc == \3\d\1\1\9\4\a\2\-\2\0\a\8\-\4\c\f\3\-\8\7\7\5\-\7\5\2\b\e\4\0\5\8\4\d\c ]] 00:16:27.859 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2674746 00:16:27.859 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2674746 ']' 00:16:27.859 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2674746 00:16:27.859 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:27.859 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.859 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2674746 
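[Annotation] The host-side checks above attach one bdev_nvme controller per host NQN through /var/tmp/host.sock and confirm each controller enumerates exactly the namespace assigned to it. Note that the NGUID passed to nvmf_subsystem_add_ns is just the UUID upper-cased with its dashes stripped (2091deb8-5902-4ada-875a-45e851619ce5 -> 2091DEB859024ADA875A45E851619CE5, per the tr -d - step). A sketch of the same verification, with hrpc as shorthand introduced here:

  hrpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  # host1 should enumerate only its namespace (it shows up as nvme0n1)
  hrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
  hrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # expect 2091deb8-5902-4ada-875a-45e851619ce5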
00:16:28.118 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:28.118 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:28.118 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2674746' 00:16:28.118 killing process with pid 2674746 00:16:28.118 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2674746 00:16:28.118 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2674746 00:16:28.377 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.377 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:28.377 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:28.377 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:28.377 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:28.636 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:28.636 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:28.636 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:28.636 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.637 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:28.637 rmmod nvme_rdma 00:16:28.637 rmmod nvme_fabrics 00:16:28.637 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.637 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:28.637 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:28.637 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2672453 ']' 00:16:28.637 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2672453 00:16:28.637 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2672453 ']' 00:16:28.637 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2672453 00:16:28.637 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:28.637 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:28.637 07:22:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2672453 00:16:28.637 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:28.637 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:28.637 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2672453' 00:16:28.637 killing process with pid 
2672453 00:16:28.637 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2672453 00:16:28.637 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2672453 00:16:28.896 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:28.896 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:28.896 00:16:28.896 real 0m25.068s 00:16:28.896 user 0m26.117s 00:16:28.896 sys 0m8.990s 00:16:28.896 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.896 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:28.896 ************************************ 00:16:28.896 END TEST nvmf_ns_masking 00:16:28.896 ************************************ 00:16:28.896 07:22:01 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:28.896 07:22:01 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:16:28.896 07:22:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:28.896 07:22:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.896 07:22:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:28.896 ************************************ 00:16:28.896 START TEST nvmf_nvme_cli 00:16:28.896 ************************************ 00:16:28.896 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:16:29.156 * Looking for test storage... 
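[Annotation] nvmf_ns_masking ends here (0m25s real) and run_test launches nvme_cli.sh over the same rdma transport. Its first step, sourcing test/nvmf/common.sh below, derives the host identity from nvme-cli itself; a sketch of that derivation (the parameter expansion is an assumption about how common.sh splits the NQN, but both resulting values match the trace):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e in this run
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # 8013ee90-59d8-e711-906e-00163566263e (assumed expansion)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")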
00:16:29.156 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.156 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.156 07:22:01 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:29.157 07:22:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:37.301 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:37.301 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:37.301 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:37.301 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:37.301 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:37.302 07:22:09 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 
-- # cut -d/ -f1
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:16:37.302 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:16:37.302 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:16:37.302 altname enp217s0f0np0
00:16:37.302 altname ens818f0np0
00:16:37.302 inet 192.168.100.8/24 scope global mlx_0_0
00:16:37.302 valid_lft forever preferred_lft forever
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}'
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:16:37.302 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:16:37.302 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:16:37.302 altname enp217s0f1np1
00:16:37.302 altname ens818f1np1
00:16:37.302 inet 192.168.100.9/24 scope global mlx_0_1
00:16:37.302 valid_lft forever preferred_lft forever
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]]
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:16:37.302 
07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:37.302 192.168.100.9' 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:37.302 192.168.100.9' 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:37.302 192.168.100.9' 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2679467 00:16:37.302 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2679467 00:16:37.303 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2679467 ']' 00:16:37.303 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.303 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:37.303 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.303 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:37.303 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.303 07:22:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:37.303 [2024-07-25 07:22:09.365312] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:16:37.303 [2024-07-25 07:22:09.365365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.303 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.303 [2024-07-25 07:22:09.450332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:37.303 [2024-07-25 07:22:09.524574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.303 [2024-07-25 07:22:09.524613] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.303 [2024-07-25 07:22:09.524622] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.303 [2024-07-25 07:22:09.524637] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.303 [2024-07-25 07:22:09.524644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
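The head/tail pipeline traced at common.sh@457-458 above is how the harness splits the two discovered RDMA addresses into first and second target IPs. A minimal standalone sketch of that step, using the exact values printed in the trace:

  # IP selection as traced above: first line of RDMA_IP_LIST becomes the
  # first target address, the second line the second target address
  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)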
00:16:37.303 [2024-07-25 07:22:09.524695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.303 [2024-07-25 07:22:09.524718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.303 [2024-07-25 07:22:09.524829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.303 [2024-07-25 07:22:09.524827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.871 [2024-07-25 07:22:10.246664] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x222bdd0/0x22302c0) succeed. 00:16:37.871 [2024-07-25 07:22:10.255781] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x222d410/0x2271950) succeed. 
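At this point the target is up (one reactor per core of the 0xF mask) and the nvmf_create_transport RPC has brought up both IB devices. The entries that follow run the subsystem setup for nvme_cli.sh; replayed by hand against the running target, the same sequence would be (commands and flags are exactly those visible in the trace at nvme_cli.sh@19-28; rpc_cmd in this harness is a thin wrapper around scripts/rpc.py):

  # target-side configuration replayed from the trace (nvme_cli.sh@19-28)
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
      -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420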
00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.871 Malloc0 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.871 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:38.132 Malloc1 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:38.132 [2024-07-25 07:22:10.452070] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:38.132 07:22:10 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:16:38.132 00:16:38.132 Discovery Log Number of Records 2, Generation counter 2 00:16:38.132 =====Discovery Log Entry 0====== 00:16:38.132 trtype: rdma 00:16:38.132 adrfam: ipv4 00:16:38.132 subtype: current discovery subsystem 00:16:38.132 treq: not required 00:16:38.132 portid: 0 00:16:38.132 trsvcid: 4420 00:16:38.132 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:38.132 traddr: 192.168.100.8 00:16:38.132 eflags: explicit discovery connections, duplicate discovery information 00:16:38.132 rdma_prtype: not specified 00:16:38.132 rdma_qptype: connected 00:16:38.132 rdma_cms: rdma-cm 00:16:38.132 rdma_pkey: 0x0000 00:16:38.132 =====Discovery Log Entry 1====== 00:16:38.132 trtype: rdma 00:16:38.132 adrfam: ipv4 00:16:38.132 subtype: nvme subsystem 00:16:38.132 treq: not required 00:16:38.132 portid: 0 00:16:38.132 trsvcid: 4420 00:16:38.132 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:38.132 traddr: 192.168.100.8 00:16:38.132 eflags: none 00:16:38.132 rdma_prtype: not specified 00:16:38.132 rdma_qptype: connected 00:16:38.132 rdma_cms: rdma-cm 00:16:38.132 rdma_pkey: 0x0000 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:38.132 07:22:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:39.069 07:22:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:39.069 07:22:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:39.069 07:22:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:39.069 07:22:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:39.069 07:22:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:39.069 07:22:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:41.606 /dev/nvme0n1 ]] 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:41.606 07:22:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.175 
07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:42.175 rmmod nvme_rdma 00:16:42.175 rmmod nvme_fabrics 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2679467 ']' 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2679467 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2679467 ']' 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2679467 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.175 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2679467 00:16:42.448 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:42.448 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:42.448 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2679467' 00:16:42.448 killing process with pid 2679467 00:16:42.448 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2679467 00:16:42.448 07:22:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2679467 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:42.730 00:16:42.730 real 0m13.698s 00:16:42.730 user 0m24.001s 00:16:42.730 sys 0m6.694s 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:42.730 ************************************ 00:16:42.730 END TEST nvmf_nvme_cli 00:16:42.730 ************************************ 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:42.730 ************************************ 00:16:42.730 START TEST nvmf_auth_target 00:16:42.730 ************************************ 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:16:42.730 * Looking for test storage... 00:16:42.730 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.730 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.731 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:42.731 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:42.731 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.731 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.731 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 
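Below, prepare_net_devs walks the PCI bus and keeps only NICs whose vendor/device IDs are on the supported list; on this rig that matches the two mlx5 ports at 0000:d9:00.0/.1 (0x15b3 - 0x1015) and resolves their netdev names from sysfs. A rough standalone equivalent of that probe, assuming lspci is available (the harness itself matches IDs against a sysfs-derived pci_bus_cache, as the trace shows):

  # list Mellanox (vendor 0x15b3) NICs and the netdevs bound to them,
  # mirroring the sysfs lookup traced below
  for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
  done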
00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:42.990 07:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@295 -- # net_devs=() 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:51.111 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:51.111 07:22:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:51.111 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:51.112 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:51.112 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: 
mlx_0_1' 00:16:51.112 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:51.112 
07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:51.112 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:51.112 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:51.112 altname enp217s0f0np0 00:16:51.112 altname ens818f0np0 00:16:51.112 inet 192.168.100.8/24 scope global mlx_0_0 00:16:51.112 valid_lft forever preferred_lft forever 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:51.112 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:51.112 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:51.112 altname enp217s0f1np1 00:16:51.112 altname ens818f1np1 00:16:51.112 inet 192.168.100.9/24 scope global mlx_0_1 00:16:51.112 valid_lft forever preferred_lft forever 
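get_ip_address, traced repeatedly above at common.sh@112-113, reduces to a three-stage pipeline; a standalone copy for reference:

  # first IPv4 address of an interface, prefix length stripped
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # prints 192.168.100.8 on this rig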
00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:51.112 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:51.371 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:51.371 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:51.371 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.371 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:51.371 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:51.371 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:51.372 192.168.100.9' 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:51.372 192.168.100.9' 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:51.372 192.168.100.9' 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2684487 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2684487 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2684487 ']' 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.372 07:22:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.372 07:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2684646 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2dd3879fde7394a9a67413fedd272af1bfce940e80ec4087 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.J7a 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2dd3879fde7394a9a67413fedd272af1bfce940e80ec4087 0 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2dd3879fde7394a9a67413fedd272af1bfce940e80ec4087 0 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2dd3879fde7394a9a67413fedd272af1bfce940e80ec4087 00:16:52.310 
07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.J7a 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.J7a 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.J7a 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=21245d8ea6e5ee8a0d8f7009333ffc679e7e3696afbea90de4899709e9c517de 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.FoM 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 21245d8ea6e5ee8a0d8f7009333ffc679e7e3696afbea90de4899709e9c517de 3 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 21245d8ea6e5ee8a0d8f7009333ffc679e7e3696afbea90de4899709e9c517de 3 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=21245d8ea6e5ee8a0d8f7009333ffc679e7e3696afbea90de4899709e9c517de 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.FoM 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.FoM 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.FoM 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
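[Editor's note] Each gen_dhchap_key call in the trace reads random bytes as a hex string (`xxd -p -c0 -l <bytes> /dev/urandom`, so 24 bytes yields a 48-character key), wraps it as a DHHC-1 secret via an inline Python snippet whose body the xtrace does not show, and stores it mode 0600 under /tmp. A reconstruction under the assumption -- consistent with the secrets printed later in this log -- that the format is DHHC-1:<hash-id>:<base64(key || CRC32)>:, with hash ids null=0, sha256=1, sha384=2, sha512=3:

```bash
#!/usr/bin/env bash
# Sketch of gen_dhchap_key as traced: random hex key -> DHHC-1 secret file.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # Assumed encoding (the inline Python body is elided in the xtrace):
    # base64 of the ASCII hex key with its little-endian CRC32 appended.
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
    chmod 0600 "$file"
    echo "$file"
}

keys[0]=$(gen_dhchap_key null 48)    # -> e.g. /tmp/spdk.key-null.J7a
```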
['sha512']='3') 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=624e9fa7a06b44ac6a20b4ec63330009 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6uS 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 624e9fa7a06b44ac6a20b4ec63330009 1 00:16:52.310 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 624e9fa7a06b44ac6a20b4ec63330009 1 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=624e9fa7a06b44ac6a20b4ec63330009 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6uS 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6uS 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.6uS 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:52.311 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=65ab55bda0a60804d459525884e083d364397ca7ed21f56d 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0t3 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 65ab55bda0a60804d459525884e083d364397ca7ed21f56d 2 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@719 -- # format_key DHHC-1 65ab55bda0a60804d459525884e083d364397ca7ed21f56d 2 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=65ab55bda0a60804d459525884e083d364397ca7ed21f56d 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0t3 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0t3 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.0t3 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2cd691a64f01c7c455f1351895fc34aa4e2a448d92d6a7cc 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Z9g 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2cd691a64f01c7c455f1351895fc34aa4e2a448d92d6a7cc 2 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2cd691a64f01c7c455f1351895fc34aa4e2a448d92d6a7cc 2 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2cd691a64f01c7c455f1351895fc34aa4e2a448d92d6a7cc 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Z9g 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Z9g 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.Z9g 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=252983f4fb6507d9892b05b1e4f24c21 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.3Tf 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 252983f4fb6507d9892b05b1e4f24c21 1 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 252983f4fb6507d9892b05b1e4f24c21 1 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=252983f4fb6507d9892b05b1e4f24c21 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:52.571 07:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.3Tf 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.3Tf 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.3Tf 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6174522a0cd422974df3f6b8cb467eb63d84567b243ecd84a2065db19772b556 00:16:52.571 07:22:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.mby 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6174522a0cd422974df3f6b8cb467eb63d84567b243ecd84a2065db19772b556 3 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6174522a0cd422974df3f6b8cb467eb63d84567b243ecd84a2065db19772b556 3 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6174522a0cd422974df3f6b8cb467eb63d84567b243ecd84a2065db19772b556 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.mby 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.mby 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.mby 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2684487 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2684487 ']' 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
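[Editor's note] From here the trace alternates between two RPC sockets: rpc_cmd drives the NVMe-oF target on the default /var/tmp/spdk.sock, while hostrpc (expanded at target/auth.sh@31 in every trace line) drives the initiator-side spdk_tgt on /var/tmp/host.sock. A sketch of that pattern, with the wrapper definitions assumed from their visible expansions; each key file is registered on both sides before it is referenced by name:

```bash
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk    # path from the log

rpc_cmd() { "$SPDK_DIR/scripts/rpc.py" "$@"; }                        # target side
hostrpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }  # host side

# Register key0 and its controller key on both target and host keyrings.
rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.J7a
hostrpc keyring_file_add_key key0  /tmp/spdk.key-null.J7a
rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FoM
hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FoM
```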
00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.571 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.831 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.831 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:52.831 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2684646 /var/tmp/host.sock 00:16:52.831 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2684646 ']' 00:16:52.832 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:52.832 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.832 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:52.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:52.832 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.832 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.J7a 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.J7a 00:16:53.091 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.J7a 00:16:53.350 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.FoM ]] 00:16:53.350 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FoM 00:16:53.350 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.350 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.350 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.350 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FoM 00:16:53.350 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FoM 00:16:53.610 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:53.610 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6uS 00:16:53.610 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.610 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.610 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.610 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.6uS 00:16:53.610 07:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.6uS 00:16:53.610 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.0t3 ]] 00:16:53.610 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0t3 00:16:53.610 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.610 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.610 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.610 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0t3 00:16:53.610 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0t3 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Z9g 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Z9g 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Z9g 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.3Tf ]] 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3Tf 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3Tf 00:16:53.869 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3Tf 00:16:54.129 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:54.129 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.mby 00:16:54.129 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.129 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.129 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.129 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.mby 00:16:54.129 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.mby 00:16:54.388 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:54.388 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:54.388 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.388 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.388 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:54.388 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:54.648 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:54.648 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.648 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:54.648 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:54.648 07:22:26 
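[Editor's note] The trace now enters one authentication round for key0: pin the host to a single digest/dhgroup pair, authorize the host NQN on the subsystem with the key pair, then attach a controller that must complete DHCHAP before it reports up. Reusing the wrappers sketched above:

```bash
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
```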
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:54.648 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.648 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.648 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.648 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.648 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.648 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.648 07:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.648 00:16:54.908 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.908 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.908 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.908 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.908 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.908 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.908 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.908 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.908 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.908 { 00:16:54.908 "cntlid": 1, 00:16:54.908 "qid": 0, 00:16:54.908 "state": "enabled", 00:16:54.908 "thread": "nvmf_tgt_poll_group_000", 00:16:54.908 "listen_address": { 00:16:54.908 "trtype": "RDMA", 00:16:54.908 "adrfam": "IPv4", 00:16:54.908 "traddr": "192.168.100.8", 00:16:54.908 "trsvcid": "4420" 00:16:54.908 }, 00:16:54.908 "peer_address": { 00:16:54.908 "trtype": "RDMA", 00:16:54.908 "adrfam": "IPv4", 00:16:54.908 "traddr": "192.168.100.8", 00:16:54.908 "trsvcid": "53470" 00:16:54.908 }, 00:16:54.908 "auth": { 00:16:54.908 "state": "completed", 00:16:54.908 "digest": "sha256", 00:16:54.908 "dhgroup": "null" 00:16:54.908 } 00:16:54.908 } 00:16:54.908 ]' 00:16:54.908 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.908 07:22:27 
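[Editor's note] The qpair listing just printed is the actual assertion of the test: the auth object must report the digest and dhgroup configured for this round, plus state "completed". The jq checks in the trace reduce to:

```bash
# Confirm the controller came up and the qpair finished DHCHAP with the
# parameters configured for this round.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
```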
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.908 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.167 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:55.167 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.167 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.167 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.167 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.167 07:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
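[Editor's note] The round just traced is also repeated from the kernel initiator: nvme-cli takes the literal DHHC-1 secret strings (--dhchap-secret for the host key, --dhchap-ctrl-secret for the controller key) rather than keyring names. Secrets abbreviated with "..." here; the full strings appear in the trace above:

```bash
nvme connect -t rdma -a 192.168.100.8 -n "$SUBNQN" -i 1 \
    -q "$HOSTNQN" --hostid 8013ee90-59d8-e711-906e-00163566263e \
    --dhchap-secret 'DHHC-1:00:...' \
    --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n "$SUBNQN"
```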
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.105 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.106 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.365 00:16:56.365 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.365 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.365 07:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.623 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.623 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.623 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.623 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.623 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.623 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.623 { 00:16:56.623 "cntlid": 3, 00:16:56.623 "qid": 0, 00:16:56.623 "state": "enabled", 00:16:56.623 "thread": "nvmf_tgt_poll_group_000", 00:16:56.623 "listen_address": { 00:16:56.623 "trtype": "RDMA", 00:16:56.623 "adrfam": "IPv4", 00:16:56.623 "traddr": "192.168.100.8", 00:16:56.623 "trsvcid": "4420" 00:16:56.623 }, 00:16:56.623 "peer_address": { 00:16:56.623 "trtype": "RDMA", 00:16:56.623 "adrfam": "IPv4", 00:16:56.623 "traddr": "192.168.100.8", 00:16:56.623 "trsvcid": "38011" 00:16:56.623 }, 00:16:56.623 "auth": { 00:16:56.623 "state": "completed", 00:16:56.623 "digest": "sha256", 00:16:56.623 "dhgroup": "null" 00:16:56.623 } 00:16:56.623 } 00:16:56.623 ]' 00:16:56.623 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.623 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.623 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.623 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:56.623 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.883 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.883 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.883 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.883 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:16:57.452 07:22:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.711 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:57.711 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.711 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.711 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.711 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.711 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:57.711 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:57.969 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:57.969 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.969 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.969 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:57.969 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:57.969 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.970 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.970 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.970 07:22:30 
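[Editor's note] Between rounds the trace tears everything back down so the next key starts from an unauthenticated state: the bdev controller is detached, the kernel connection dropped, and the host NQN removed from the subsystem. In sketch form:

```bash
hostrpc bdev_nvme_detach_controller nvme0
nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
```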
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.970 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.970 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.970 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.970 00:16:58.229 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.229 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.229 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.229 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.229 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.229 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.229 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.229 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.229 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.229 { 00:16:58.229 "cntlid": 5, 00:16:58.229 "qid": 0, 00:16:58.229 "state": "enabled", 00:16:58.229 "thread": "nvmf_tgt_poll_group_000", 00:16:58.229 "listen_address": { 00:16:58.229 "trtype": "RDMA", 00:16:58.229 "adrfam": "IPv4", 00:16:58.229 "traddr": "192.168.100.8", 00:16:58.229 "trsvcid": "4420" 00:16:58.229 }, 00:16:58.229 "peer_address": { 00:16:58.229 "trtype": "RDMA", 00:16:58.229 "adrfam": "IPv4", 00:16:58.229 "traddr": "192.168.100.8", 00:16:58.229 "trsvcid": "56189" 00:16:58.229 }, 00:16:58.229 "auth": { 00:16:58.229 "state": "completed", 00:16:58.229 "digest": "sha256", 00:16:58.229 "dhgroup": "null" 00:16:58.229 } 00:16:58.229 } 00:16:58.229 ]' 00:16:58.229 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.229 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.229 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.488 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:58.488 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.488 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.488 07:22:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.488 07:22:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.747 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:16:59.316 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.316 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:59.316 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.316 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.316 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.316 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.316 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:59.316 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:59.575 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:59.575 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.575 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.575 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:59.575 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.575 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.575 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:59.575 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.575 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.575 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.575 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t 
rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.576 07:22:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.834 00:16:59.834 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.834 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.834 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.094 { 00:17:00.094 "cntlid": 7, 00:17:00.094 "qid": 0, 00:17:00.094 "state": "enabled", 00:17:00.094 "thread": "nvmf_tgt_poll_group_000", 00:17:00.094 "listen_address": { 00:17:00.094 "trtype": "RDMA", 00:17:00.094 "adrfam": "IPv4", 00:17:00.094 "traddr": "192.168.100.8", 00:17:00.094 "trsvcid": "4420" 00:17:00.094 }, 00:17:00.094 "peer_address": { 00:17:00.094 "trtype": "RDMA", 00:17:00.094 "adrfam": "IPv4", 00:17:00.094 "traddr": "192.168.100.8", 00:17:00.094 "trsvcid": "55394" 00:17:00.094 }, 00:17:00.094 "auth": { 00:17:00.094 "state": "completed", 00:17:00.094 "digest": "sha256", 00:17:00.094 "dhgroup": "null" 00:17:00.094 } 00:17:00.094 } 00:17:00.094 ]' 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.094 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.353 07:22:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:17:00.985 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.985 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:00.985 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.985 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.985 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.985 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.985 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.985 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:00.985 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:01.244 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:01.244 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.244 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.244 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:01.244 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:01.244 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.244 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.244 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.244 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.244 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.244 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.244 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
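[Editor's note] With all four keys exercised against the null dhgroup, the trace moves on to ffdhe2048. The loop markers repeated throughout (target/auth.sh@91-@94) show the shape of the matrix being walked; a sketch of that driving structure, with connect_authenticate standing for the per-round add_host/attach/verify sequence traced above:

```bash
# Every digest x dhgroup x key-id combination gets its own round.
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
```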
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.502 00:17:01.502 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.502 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.502 07:22:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.502 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.761 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.762 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.762 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.762 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.762 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.762 { 00:17:01.762 "cntlid": 9, 00:17:01.762 "qid": 0, 00:17:01.762 "state": "enabled", 00:17:01.762 "thread": "nvmf_tgt_poll_group_000", 00:17:01.762 "listen_address": { 00:17:01.762 "trtype": "RDMA", 00:17:01.762 "adrfam": "IPv4", 00:17:01.762 "traddr": "192.168.100.8", 00:17:01.762 "trsvcid": "4420" 00:17:01.762 }, 00:17:01.762 "peer_address": { 00:17:01.762 "trtype": "RDMA", 00:17:01.762 "adrfam": "IPv4", 00:17:01.762 "traddr": "192.168.100.8", 00:17:01.762 "trsvcid": "59081" 00:17:01.762 }, 00:17:01.762 "auth": { 00:17:01.762 "state": "completed", 00:17:01.762 "digest": "sha256", 00:17:01.762 "dhgroup": "ffdhe2048" 00:17:01.762 } 00:17:01.762 } 00:17:01.762 ]' 00:17:01.762 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.762 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.762 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.762 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:01.762 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.762 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.762 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.762 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.021 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:17:02.589 07:22:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.589 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:02.589 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.589 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.589 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.589 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.589 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:02.589 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:02.848 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:02.848 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.848 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:02.848 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:02.848 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:02.848 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.848 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.848 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.848 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.848 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.848 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.848 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.108 00:17:03.108 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.108 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.108 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.367 { 00:17:03.367 "cntlid": 11, 00:17:03.367 "qid": 0, 00:17:03.367 "state": "enabled", 00:17:03.367 "thread": "nvmf_tgt_poll_group_000", 00:17:03.367 "listen_address": { 00:17:03.367 "trtype": "RDMA", 00:17:03.367 "adrfam": "IPv4", 00:17:03.367 "traddr": "192.168.100.8", 00:17:03.367 "trsvcid": "4420" 00:17:03.367 }, 00:17:03.367 "peer_address": { 00:17:03.367 "trtype": "RDMA", 00:17:03.367 "adrfam": "IPv4", 00:17:03.367 "traddr": "192.168.100.8", 00:17:03.367 "trsvcid": "49383" 00:17:03.367 }, 00:17:03.367 "auth": { 00:17:03.367 "state": "completed", 00:17:03.367 "digest": "sha256", 00:17:03.367 "dhgroup": "ffdhe2048" 00:17:03.367 } 00:17:03.367 } 00:17:03.367 ]' 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.367 07:22:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.626 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:17:04.195 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.195 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:04.195 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.195 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.195 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.195 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.195 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.454 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.455 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:04.455 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.455 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:04.455 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:04.455 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:04.455 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.455 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.455 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.455 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.455 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.455 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.455 07:22:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.714 00:17:04.714 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.714 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name'
00:17:04.714 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:04.973 {
00:17:04.973 "cntlid": 13,
00:17:04.973 "qid": 0,
00:17:04.973 "state": "enabled",
00:17:04.973 "thread": "nvmf_tgt_poll_group_000",
00:17:04.973 "listen_address": {
00:17:04.973 "trtype": "RDMA",
00:17:04.973 "adrfam": "IPv4",
00:17:04.973 "traddr": "192.168.100.8",
00:17:04.973 "trsvcid": "4420"
00:17:04.973 },
00:17:04.973 "peer_address": {
00:17:04.973 "trtype": "RDMA",
00:17:04.973 "adrfam": "IPv4",
00:17:04.973 "traddr": "192.168.100.8",
00:17:04.973 "trsvcid": "46472"
00:17:04.973 },
00:17:04.973 "auth": {
00:17:04.973 "state": "completed",
00:17:04.973 "digest": "sha256",
00:17:04.973 "dhgroup": "ffdhe2048"
00:17:04.973 }
00:17:04.973 }
00:17:04.973 ]'
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:04.973 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:05.232 07:22:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 
00:17:05.800 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:06.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.060 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.319 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.319 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.319 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.319 00:17:06.319 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.319 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.319 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.578 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.578 07:22:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.578 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.578 07:22:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.578 07:22:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.578 07:22:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.578 { 00:17:06.578 "cntlid": 15, 00:17:06.578 "qid": 0, 00:17:06.578 "state": "enabled", 00:17:06.578 "thread": "nvmf_tgt_poll_group_000", 00:17:06.578 "listen_address": { 00:17:06.578 "trtype": "RDMA", 00:17:06.578 "adrfam": "IPv4", 00:17:06.578 "traddr": "192.168.100.8", 00:17:06.578 "trsvcid": "4420" 00:17:06.578 }, 00:17:06.578 "peer_address": { 00:17:06.578 "trtype": "RDMA", 00:17:06.578 "adrfam": "IPv4", 00:17:06.578 "traddr": "192.168.100.8", 00:17:06.578 "trsvcid": "43814" 00:17:06.578 }, 00:17:06.578 "auth": { 00:17:06.578 "state": "completed", 00:17:06.578 "digest": "sha256", 00:17:06.578 "dhgroup": "ffdhe2048" 00:17:06.578 } 00:17:06.578 } 00:17:06.578 ]' 00:17:06.578 07:22:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.578 07:22:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.578 07:22:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.579 07:22:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.579 07:22:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.838 07:22:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.838 07:22:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.838 07:22:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.838 07:22:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:17:07.406 07:22:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.666 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:07.666 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.666 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.666 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.666 
07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.666 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.666 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.666 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.925 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:07.925 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.925 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:07.925 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:07.925 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:07.925 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.925 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.925 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.925 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.925 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.925 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.925 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.184 00:17:08.184 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.184 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.184 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.184 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.184 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.184 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:08.184 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.184 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.184 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.184 { 00:17:08.184 "cntlid": 17, 00:17:08.184 "qid": 0, 00:17:08.184 "state": "enabled", 00:17:08.184 "thread": "nvmf_tgt_poll_group_000", 00:17:08.184 "listen_address": { 00:17:08.184 "trtype": "RDMA", 00:17:08.184 "adrfam": "IPv4", 00:17:08.184 "traddr": "192.168.100.8", 00:17:08.184 "trsvcid": "4420" 00:17:08.184 }, 00:17:08.184 "peer_address": { 00:17:08.184 "trtype": "RDMA", 00:17:08.184 "adrfam": "IPv4", 00:17:08.184 "traddr": "192.168.100.8", 00:17:08.184 "trsvcid": "44313" 00:17:08.184 }, 00:17:08.184 "auth": { 00:17:08.184 "state": "completed", 00:17:08.184 "digest": "sha256", 00:17:08.184 "dhgroup": "ffdhe3072" 00:17:08.185 } 00:17:08.185 } 00:17:08.185 ]' 00:17:08.185 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.444 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.444 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.444 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.444 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.444 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.444 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.444 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.704 07:22:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:17:09.272 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.272 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:09.272 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.272 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.272 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.272 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.272 07:22:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:09.272 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:09.532 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:09.532 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.532 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:09.532 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:09.532 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:09.532 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.532 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.532 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.532 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.532 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.532 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.532 07:22:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.791 00:17:09.791 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.791 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.791 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.050 
07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:10.050 {
00:17:10.050 "cntlid": 19,
00:17:10.050 "qid": 0,
00:17:10.050 "state": "enabled",
00:17:10.050 "thread": "nvmf_tgt_poll_group_000",
00:17:10.050 "listen_address": {
00:17:10.050 "trtype": "RDMA",
00:17:10.050 "adrfam": "IPv4",
00:17:10.050 "traddr": "192.168.100.8",
00:17:10.050 "trsvcid": "4420"
00:17:10.050 },
00:17:10.050 "peer_address": {
00:17:10.050 "trtype": "RDMA",
00:17:10.050 "adrfam": "IPv4",
00:17:10.050 "traddr": "192.168.100.8",
00:17:10.050 "trsvcid": "55858"
00:17:10.050 },
00:17:10.050 "auth": {
00:17:10.050 "state": "completed",
00:17:10.050 "digest": "sha256",
00:17:10.050 "dhgroup": "ffdhe3072"
00:17:10.050 }
00:17:10.050 }
00:17:10.050 ]'
00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:10.050 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:10.308 07:22:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 
00:17:10.876 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:11.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:11.135 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:11.394 
00:17:11.394 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:11.394 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:11.394 07:22:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:11.652 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:11.652 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:11.652 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.652 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.652 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.652 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:11.652 {
00:17:11.652 "cntlid": 21,
00:17:11.652 "qid": 0,
00:17:11.652 "state": "enabled",
00:17:11.652 "thread": "nvmf_tgt_poll_group_000",
00:17:11.652 "listen_address": {
00:17:11.652 "trtype": "RDMA",
00:17:11.652 "adrfam": "IPv4",
00:17:11.652 "traddr": "192.168.100.8",
00:17:11.652 "trsvcid": "4420"
00:17:11.652 },
00:17:11.652 "peer_address": {
00:17:11.652 "trtype": "RDMA",
00:17:11.652 "adrfam": "IPv4",
00:17:11.652 "traddr": "192.168.100.8",
00:17:11.652 "trsvcid": "55514"
00:17:11.653 },
00:17:11.653 "auth": {
00:17:11.653 "state": "completed",
00:17:11.653 "digest": "sha256",
00:17:11.653 "dhgroup": "ffdhe3072"
00:17:11.653 }
00:17:11.653 }
00:17:11.653 ]'
00:17:11.653 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:11.653 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:11.653 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:11.653 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:11.653 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:11.653 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:11.653 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:11.653 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:11.911 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 
00:17:12.479 07:22:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:12.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.738 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.997 00:17:12.997 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.997 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.997 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.256 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.256 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.256 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.256 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.256 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.256 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.256 { 00:17:13.256 "cntlid": 23, 00:17:13.256 "qid": 0, 00:17:13.256 "state": "enabled", 00:17:13.256 "thread": "nvmf_tgt_poll_group_000", 00:17:13.256 "listen_address": { 00:17:13.256 "trtype": "RDMA", 00:17:13.256 "adrfam": "IPv4", 00:17:13.256 "traddr": "192.168.100.8", 00:17:13.256 "trsvcid": "4420" 00:17:13.256 }, 00:17:13.256 "peer_address": { 00:17:13.256 "trtype": "RDMA", 00:17:13.256 "adrfam": "IPv4", 00:17:13.256 "traddr": "192.168.100.8", 00:17:13.256 "trsvcid": "39533" 00:17:13.256 }, 00:17:13.256 
"auth": { 00:17:13.256 "state": "completed", 00:17:13.256 "digest": "sha256", 00:17:13.256 "dhgroup": "ffdhe3072" 00:17:13.256 } 00:17:13.256 } 00:17:13.256 ]' 00:17:13.256 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.256 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.256 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.256 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.256 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.515 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.515 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.515 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.515 07:22:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:17:14.114 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.373 07:22:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.632 00:17:14.891 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.891 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.891 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.891 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.891 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.891 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.891 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.891 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.891 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.891 { 00:17:14.891 "cntlid": 25, 00:17:14.891 "qid": 0, 00:17:14.891 "state": "enabled", 00:17:14.891 "thread": "nvmf_tgt_poll_group_000", 00:17:14.891 "listen_address": { 00:17:14.891 "trtype": "RDMA", 00:17:14.891 "adrfam": "IPv4", 00:17:14.891 "traddr": "192.168.100.8", 00:17:14.891 "trsvcid": "4420" 00:17:14.891 }, 00:17:14.891 "peer_address": { 00:17:14.891 "trtype": "RDMA", 00:17:14.891 "adrfam": "IPv4", 00:17:14.891 "traddr": "192.168.100.8", 00:17:14.891 "trsvcid": "37244" 00:17:14.891 }, 00:17:14.891 "auth": { 00:17:14.891 "state": "completed", 00:17:14.891 "digest": "sha256", 00:17:14.891 "dhgroup": "ffdhe4096" 00:17:14.891 } 00:17:14.891 } 00:17:14.891 ]' 00:17:14.891 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.891 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.891 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.150 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:15.150 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.150 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.150 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.150 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.150 07:22:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:17:15.718 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.977 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:15.977 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.977 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.977 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.977 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.977 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:15.977 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:16.236 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:16.236 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.236 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:16.236 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:16.236 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:16.236 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.236 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.236 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.236 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.236 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.236 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.236 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.496 00:17:16.496 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.496 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.496 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.496 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.496 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.496 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.496 07:22:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.496 07:22:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.496 07:22:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.496 { 00:17:16.496 "cntlid": 27, 00:17:16.496 "qid": 0, 00:17:16.496 "state": "enabled", 00:17:16.496 "thread": "nvmf_tgt_poll_group_000", 00:17:16.496 "listen_address": { 00:17:16.496 "trtype": "RDMA", 00:17:16.496 "adrfam": "IPv4", 00:17:16.496 "traddr": "192.168.100.8", 00:17:16.496 "trsvcid": "4420" 00:17:16.496 }, 00:17:16.496 "peer_address": { 00:17:16.496 "trtype": "RDMA", 00:17:16.496 "adrfam": "IPv4", 00:17:16.496 "traddr": "192.168.100.8", 00:17:16.496 "trsvcid": "38112" 00:17:16.496 }, 00:17:16.496 "auth": { 00:17:16.496 "state": "completed", 00:17:16.496 "digest": "sha256", 00:17:16.496 "dhgroup": "ffdhe4096" 00:17:16.496 } 00:17:16.496 } 00:17:16.496 ]' 00:17:16.496 07:22:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.756 07:22:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.756 07:22:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.756 07:22:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:16.756 07:22:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.756 07:22:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.756 07:22:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.756 07:22:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.015 07:22:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:17:17.584 07:22:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.584 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:17.584 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.584 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.584 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.584 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.584 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.584 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.843 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:17.843 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.843 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.843 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:17.843 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:17.843 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.843 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key 
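The kernel-initiator leg hands the same material to nvme-cli in the DHHC-1 wire format. The two-digit field after "DHHC-1:" identifies the hash the secret was transformed with (00 for an untransformed secret, then 01/02/03 for SHA-256/SHA-384/SHA-512, per the NVMe base spec's key representation), which is why the four key slots in this sweep carry different indicators. The connect just logged for key1, flags exactly as issued above and secret values elided for readability:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid 8013ee90-59d8-e711-906e-00163566263e \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0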
ckey2 00:17:17.843 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.844 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.844 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.844 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.844 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.102 00:17:18.102 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.102 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.102 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.362 { 00:17:18.362 "cntlid": 29, 00:17:18.362 "qid": 0, 00:17:18.362 "state": "enabled", 00:17:18.362 "thread": "nvmf_tgt_poll_group_000", 00:17:18.362 "listen_address": { 00:17:18.362 "trtype": "RDMA", 00:17:18.362 "adrfam": "IPv4", 00:17:18.362 "traddr": "192.168.100.8", 00:17:18.362 "trsvcid": "4420" 00:17:18.362 }, 00:17:18.362 "peer_address": { 00:17:18.362 "trtype": "RDMA", 00:17:18.362 "adrfam": "IPv4", 00:17:18.362 "traddr": "192.168.100.8", 00:17:18.362 "trsvcid": "45007" 00:17:18.362 }, 00:17:18.362 "auth": { 00:17:18.362 "state": "completed", 00:17:18.362 "digest": "sha256", 00:17:18.362 "dhgroup": "ffdhe4096" 00:17:18.362 } 00:17:18.362 } 00:17:18.362 ]' 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.362 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.621 07:22:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:17:19.190 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.191 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:19.191 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.191 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.191 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.191 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.191 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:19.191 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:19.450 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:19.450 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.450 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.450 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:19.450 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:19.450 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.450 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:19.450 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.450 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.450 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
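Note the asymmetry in the key3 pass starting above: auth.sh@37 builds the controller-key argument with a ${var:+word} expansion, so when no ckey is registered for a slot the array comes out empty and both nvmf_subsystem_add_host and the attach run with --dhchap-key only, exercising host authentication without the controller proving itself back. The idiom in isolation, with illustrative values that are not taken from the script:

    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)   # slot 3 deliberately absent
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"   # prints 0: the flag pair drops out, auth is one-way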
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.450 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.450 07:22:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.709 00:17:19.709 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.709 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.709 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.968 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.968 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.968 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.968 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.968 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.968 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.968 { 00:17:19.968 "cntlid": 31, 00:17:19.968 "qid": 0, 00:17:19.968 "state": "enabled", 00:17:19.968 "thread": "nvmf_tgt_poll_group_000", 00:17:19.968 "listen_address": { 00:17:19.968 "trtype": "RDMA", 00:17:19.968 "adrfam": "IPv4", 00:17:19.968 "traddr": "192.168.100.8", 00:17:19.968 "trsvcid": "4420" 00:17:19.968 }, 00:17:19.968 "peer_address": { 00:17:19.968 "trtype": "RDMA", 00:17:19.968 "adrfam": "IPv4", 00:17:19.968 "traddr": "192.168.100.8", 00:17:19.968 "trsvcid": "56941" 00:17:19.968 }, 00:17:19.968 "auth": { 00:17:19.968 "state": "completed", 00:17:19.968 "digest": "sha256", 00:17:19.968 "dhgroup": "ffdhe4096" 00:17:19.968 } 00:17:19.968 } 00:17:19.968 ]' 00:17:19.968 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.968 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.968 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.969 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.969 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.969 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.969 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.969 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.227 07:22:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:17:20.795 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.795 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:20.795 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.795 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
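With key3 done, the outer loop advances from ffdhe4096 to ffdhe6144 (the auth.sh@92-93 frames above) and the whole four-key sweep repeats. The shape of the driver, reconstructed from those frames rather than copied from the script; the keys array holds the four slots seen in this trace, and connect_authenticate and hostrpc are the script's own functions:

    keys=(key0 key1 key2 key3)   # slots exercised in this trace
    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do   # groups visible in this slice
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done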
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.053 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.621 00:17:21.621 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.621 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.621 07:22:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.621 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.621 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.621 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.621 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.621 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.621 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.621 { 00:17:21.621 "cntlid": 33, 00:17:21.621 "qid": 0, 00:17:21.621 "state": "enabled", 00:17:21.621 "thread": "nvmf_tgt_poll_group_000", 00:17:21.621 "listen_address": { 00:17:21.621 "trtype": "RDMA", 00:17:21.621 "adrfam": "IPv4", 00:17:21.621 "traddr": "192.168.100.8", 00:17:21.621 "trsvcid": "4420" 00:17:21.621 }, 00:17:21.621 "peer_address": { 00:17:21.621 "trtype": "RDMA", 00:17:21.621 "adrfam": "IPv4", 00:17:21.622 "traddr": "192.168.100.8", 00:17:21.622 "trsvcid": "48566" 00:17:21.622 }, 00:17:21.622 "auth": { 00:17:21.622 "state": "completed", 00:17:21.622 "digest": "sha256", 00:17:21.622 "dhgroup": "ffdhe6144" 00:17:21.622 } 00:17:21.622 } 00:17:21.622 ]' 00:17:21.622 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.622 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.622 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.622 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:21.622 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.880 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.880 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.880 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.881 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:17:22.448 07:22:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.707 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:22.707 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.707 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.707 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.707 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.707 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:22.707 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:22.965 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:22.965 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.965 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:22.965 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:22.965 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:22.965 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.965 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.965 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.965 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.965 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.965 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.965 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.223 00:17:23.223 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.223 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.223 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.481 { 00:17:23.481 "cntlid": 35, 00:17:23.481 "qid": 0, 00:17:23.481 "state": "enabled", 00:17:23.481 "thread": "nvmf_tgt_poll_group_000", 00:17:23.481 "listen_address": { 00:17:23.481 "trtype": "RDMA", 00:17:23.481 "adrfam": "IPv4", 00:17:23.481 "traddr": "192.168.100.8", 00:17:23.481 "trsvcid": "4420" 00:17:23.481 }, 00:17:23.481 "peer_address": { 00:17:23.481 "trtype": "RDMA", 00:17:23.481 "adrfam": "IPv4", 00:17:23.481 "traddr": "192.168.100.8", 00:17:23.481 "trsvcid": "54445" 00:17:23.481 }, 00:17:23.481 "auth": { 00:17:23.481 "state": "completed", 00:17:23.481 "digest": "sha256", 00:17:23.481 "dhgroup": "ffdhe6144" 00:17:23.481 } 00:17:23.481 } 00:17:23.481 ]' 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.481 07:22:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.740 07:22:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:17:24.306 07:22:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.306 07:22:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:24.307 07:22:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.307 07:22:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.307 07:22:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.307 07:22:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.307 07:22:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:24.307 07:22:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:24.564 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:24.564 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.564 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.564 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:24.564 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:24.564 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.564 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.564 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.564 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.564 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.564 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.565 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:17:24.823 00:17:25.082 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.082 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.082 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.082 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.082 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.082 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.082 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.082 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.082 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.082 { 00:17:25.082 "cntlid": 37, 00:17:25.082 "qid": 0, 00:17:25.082 "state": "enabled", 00:17:25.082 "thread": "nvmf_tgt_poll_group_000", 00:17:25.082 "listen_address": { 00:17:25.082 "trtype": "RDMA", 00:17:25.082 "adrfam": "IPv4", 00:17:25.082 "traddr": "192.168.100.8", 00:17:25.082 "trsvcid": "4420" 00:17:25.082 }, 00:17:25.082 "peer_address": { 00:17:25.082 "trtype": "RDMA", 00:17:25.082 "adrfam": "IPv4", 00:17:25.082 "traddr": "192.168.100.8", 00:17:25.082 "trsvcid": "53947" 00:17:25.082 }, 00:17:25.082 "auth": { 00:17:25.082 "state": "completed", 00:17:25.082 "digest": "sha256", 00:17:25.082 "dhgroup": "ffdhe6144" 00:17:25.082 } 00:17:25.082 } 00:17:25.082 ]' 00:17:25.082 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.082 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.082 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.344 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:25.344 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.344 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.344 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.344 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.344 07:22:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:17:25.911 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:17:26.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.170 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:26.170 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.170 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.170 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.170 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.171 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:26.171 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:26.430 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:26.430 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.430 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:26.430 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:26.430 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:26.430 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.430 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:26.430 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.430 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.430 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.430 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.430 07:22:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.756 00:17:26.756 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.756 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.756 07:22:59 
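Throughout this section every host-side command funnels through hostrpc, and the auth.sh@31 frames show exactly what it expands to: the initiator's bdev_nvme stack runs as a second SPDK application reached over its own RPC socket, keeping target and host state cleanly separated on one machine. The wrapper as inferred from its expansion (the function body itself is a reconstruction, not copied from the script):

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path as logged
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }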
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.756 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.756 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.756 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.756 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.756 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.756 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.756 { 00:17:26.756 "cntlid": 39, 00:17:26.756 "qid": 0, 00:17:26.756 "state": "enabled", 00:17:26.756 "thread": "nvmf_tgt_poll_group_000", 00:17:26.756 "listen_address": { 00:17:26.756 "trtype": "RDMA", 00:17:26.756 "adrfam": "IPv4", 00:17:26.756 "traddr": "192.168.100.8", 00:17:26.756 "trsvcid": "4420" 00:17:26.756 }, 00:17:26.756 "peer_address": { 00:17:26.756 "trtype": "RDMA", 00:17:26.756 "adrfam": "IPv4", 00:17:26.756 "traddr": "192.168.100.8", 00:17:26.756 "trsvcid": "33733" 00:17:26.756 }, 00:17:26.756 "auth": { 00:17:26.756 "state": "completed", 00:17:26.756 "digest": "sha256", 00:17:26.756 "dhgroup": "ffdhe6144" 00:17:26.756 } 00:17:26.756 } 00:17:26.756 ]' 00:17:27.019 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.019 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.019 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.019 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.019 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.019 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.019 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.019 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.279 07:22:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:17:27.848 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.848 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:27.848 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.848 
07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.848 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.848 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.848 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.848 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.848 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:28.108 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:28.108 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.108 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:28.108 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:28.108 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:28.108 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.108 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.108 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.108 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.108 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.108 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.108 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.676 00:17:28.676 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.676 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.676 07:23:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.676 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.676 07:23:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.676 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.676 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.676 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.676 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.676 { 00:17:28.676 "cntlid": 41, 00:17:28.676 "qid": 0, 00:17:28.676 "state": "enabled", 00:17:28.676 "thread": "nvmf_tgt_poll_group_000", 00:17:28.676 "listen_address": { 00:17:28.676 "trtype": "RDMA", 00:17:28.676 "adrfam": "IPv4", 00:17:28.676 "traddr": "192.168.100.8", 00:17:28.676 "trsvcid": "4420" 00:17:28.676 }, 00:17:28.676 "peer_address": { 00:17:28.676 "trtype": "RDMA", 00:17:28.676 "adrfam": "IPv4", 00:17:28.676 "traddr": "192.168.100.8", 00:17:28.676 "trsvcid": "32912" 00:17:28.676 }, 00:17:28.676 "auth": { 00:17:28.676 "state": "completed", 00:17:28.676 "digest": "sha256", 00:17:28.676 "dhgroup": "ffdhe8192" 00:17:28.676 } 00:17:28.676 } 00:17:28.676 ]' 00:17:28.676 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.676 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.676 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.676 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.676 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.935 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.935 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.935 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.935 07:23:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.873 07:23:02 
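The DHHC-1:00 secret reused just above for the ffdhe8192 pass is plain base64 under the prefix, so its size can be checked directly. As I read the NVMe base spec's key representation, the payload is the secret bytes with a CRC-32 appended, and the decoded length here is consistent with that; treat the breakdown as an inference, not something the log asserts:

    secret='DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==:'
    b64=${secret#DHHC-1:00:}   # strip the prefix
    b64=${b64%:}               # and the trailing colon of the wire format
    echo -n "$b64" | base64 -d | wc -c   # 52 bytes: a 48-byte secret plus 4-byte CRC-32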
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.873 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.441 00:17:30.442 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.442 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.442 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.708 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.708 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.708 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:30.708 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.708 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.708 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.708 { 00:17:30.708 "cntlid": 43, 00:17:30.708 "qid": 0, 00:17:30.708 "state": "enabled", 00:17:30.708 "thread": "nvmf_tgt_poll_group_000", 00:17:30.708 "listen_address": { 00:17:30.708 "trtype": "RDMA", 00:17:30.708 "adrfam": "IPv4", 00:17:30.708 "traddr": "192.168.100.8", 00:17:30.708 "trsvcid": "4420" 00:17:30.708 }, 00:17:30.708 "peer_address": { 00:17:30.708 "trtype": "RDMA", 00:17:30.708 "adrfam": "IPv4", 00:17:30.708 "traddr": "192.168.100.8", 00:17:30.708 "trsvcid": "48466" 00:17:30.708 }, 00:17:30.708 "auth": { 00:17:30.708 "state": "completed", 00:17:30.708 "digest": "sha256", 00:17:30.708 "dhgroup": "ffdhe8192" 00:17:30.708 } 00:17:30.708 } 00:17:30.708 ]' 00:17:30.708 07:23:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.708 07:23:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.708 07:23:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.708 07:23:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.708 07:23:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.708 07:23:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.708 07:23:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.708 07:23:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.969 07:23:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:17:31.536 07:23:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.536 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:31.536 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.536 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.536 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.536 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.536 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:31.536 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:31.796 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:31.796 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.796 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.796 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:31.796 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:31.796 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.796 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.796 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.796 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.796 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.796 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.796 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.364 00:17:32.364 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.364 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.364 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.364 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.364 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.364 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.364 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.364 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.364 07:23:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.364 { 00:17:32.364 "cntlid": 45, 00:17:32.364 "qid": 0, 00:17:32.364 "state": "enabled", 00:17:32.364 "thread": "nvmf_tgt_poll_group_000", 00:17:32.364 "listen_address": { 00:17:32.364 "trtype": "RDMA", 00:17:32.364 "adrfam": "IPv4", 00:17:32.364 "traddr": "192.168.100.8", 00:17:32.364 "trsvcid": "4420" 00:17:32.364 }, 00:17:32.364 "peer_address": { 00:17:32.364 "trtype": "RDMA", 00:17:32.364 "adrfam": "IPv4", 00:17:32.364 "traddr": "192.168.100.8", 00:17:32.364 "trsvcid": "57082" 00:17:32.364 }, 00:17:32.364 "auth": { 00:17:32.364 "state": "completed", 00:17:32.364 "digest": "sha256", 00:17:32.364 "dhgroup": "ffdhe8192" 00:17:32.364 } 00:17:32.364 } 00:17:32.364 ]' 00:17:32.364 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.624 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.624 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.624 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.624 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.624 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.624 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.624 07:23:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.882 07:23:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:17:33.451 07:23:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.451 07:23:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:33.451 07:23:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.451 07:23:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.451 07:23:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.451 07:23:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.451 07:23:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:33.451 07:23:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:33.710 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:33.710 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.710 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:33.710 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:33.710 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:33.710 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.710 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:33.710 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.710 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.710 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.710 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.710 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.279 00:17:34.279 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.279 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.280 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.280 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.280 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.280 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.280 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.280 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.280 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.280 { 00:17:34.280 "cntlid": 47, 00:17:34.280 "qid": 0, 00:17:34.280 "state": "enabled", 00:17:34.280 "thread": "nvmf_tgt_poll_group_000", 00:17:34.280 "listen_address": { 00:17:34.280 "trtype": "RDMA", 00:17:34.280 "adrfam": "IPv4", 00:17:34.280 "traddr": "192.168.100.8", 00:17:34.280 
"trsvcid": "4420" 00:17:34.280 }, 00:17:34.280 "peer_address": { 00:17:34.280 "trtype": "RDMA", 00:17:34.280 "adrfam": "IPv4", 00:17:34.280 "traddr": "192.168.100.8", 00:17:34.280 "trsvcid": "45287" 00:17:34.280 }, 00:17:34.280 "auth": { 00:17:34.280 "state": "completed", 00:17:34.280 "digest": "sha256", 00:17:34.280 "dhgroup": "ffdhe8192" 00:17:34.280 } 00:17:34.280 } 00:17:34.280 ]' 00:17:34.280 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.280 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.280 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.539 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.539 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.539 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.539 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.539 07:23:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.539 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:17:35.477 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.477 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:35.477 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.477 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 null 0 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.478 07:23:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.737 00:17:35.737 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.737 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.737 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.996 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.996 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.996 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.996 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.996 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.996 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.996 { 00:17:35.996 "cntlid": 49, 00:17:35.996 "qid": 0, 00:17:35.996 "state": "enabled", 00:17:35.996 "thread": "nvmf_tgt_poll_group_000", 00:17:35.996 "listen_address": { 00:17:35.996 "trtype": "RDMA", 00:17:35.996 "adrfam": "IPv4", 00:17:35.996 "traddr": "192.168.100.8", 00:17:35.996 "trsvcid": "4420" 00:17:35.996 }, 00:17:35.996 "peer_address": { 00:17:35.996 "trtype": "RDMA", 00:17:35.996 "adrfam": 
"IPv4", 00:17:35.996 "traddr": "192.168.100.8", 00:17:35.996 "trsvcid": "45917" 00:17:35.996 }, 00:17:35.996 "auth": { 00:17:35.996 "state": "completed", 00:17:35.996 "digest": "sha384", 00:17:35.996 "dhgroup": "null" 00:17:35.996 } 00:17:35.996 } 00:17:35.996 ]' 00:17:35.996 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.996 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.996 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.996 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:35.996 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.256 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.256 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.256 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.256 07:23:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:17:36.824 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.083 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:37.083 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.083 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.083 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.083 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.083 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:37.083 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:37.343 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:37.343 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.343 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.343 07:23:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:37.343 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:37.343 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.343 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.343 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.343 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.343 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.343 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.343 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.602 00:17:37.602 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.602 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.602 07:23:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.602 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.602 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.602 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.602 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.602 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.602 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.602 { 00:17:37.602 "cntlid": 51, 00:17:37.602 "qid": 0, 00:17:37.602 "state": "enabled", 00:17:37.602 "thread": "nvmf_tgt_poll_group_000", 00:17:37.602 "listen_address": { 00:17:37.602 "trtype": "RDMA", 00:17:37.602 "adrfam": "IPv4", 00:17:37.602 "traddr": "192.168.100.8", 00:17:37.602 "trsvcid": "4420" 00:17:37.602 }, 00:17:37.602 "peer_address": { 00:17:37.602 "trtype": "RDMA", 00:17:37.602 "adrfam": "IPv4", 00:17:37.602 "traddr": "192.168.100.8", 00:17:37.602 "trsvcid": "44019" 00:17:37.602 }, 00:17:37.602 "auth": { 00:17:37.602 "state": "completed", 00:17:37.602 "digest": "sha384", 00:17:37.602 "dhgroup": "null" 00:17:37.602 } 00:17:37.602 } 00:17:37.602 ]' 00:17:37.602 07:23:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.862 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.862 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.862 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:37.862 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.862 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.862 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.862 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.122 07:23:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:17:38.690 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.690 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:38.690 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.690 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.690 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.690 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.690 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:38.690 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:38.948 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:38.948 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.948 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.948 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:38.948 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:38.948 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
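The DHHC-1 strings passed to nvme connect throughout this trace are the standard DH-HMAC-CHAP secret representation: --dhchap-secret carries the host key and --dhchap-ctrl-secret the controller key for bidirectional authentication. The field after DHHC-1 encodes how the secret was transformed (00 = none, 01/02/03 = SHA-256/384/512), and the base64 payload carries the key bytes followed by a 4-byte CRC-32 trailer. A quick, illustrative way to sanity-check one of the secrets taken from the trace (not part of the test itself):

    # strip the DHHC-1:<tt>: prefix and the trailing colon, then decode
    secret='DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A:'
    payload=${secret#DHHC-1:??:}; payload=${payload%:}
    # for this example: 32 secret bytes + 4-byte CRC-32 trailer = 36
    echo "$payload" | base64 -d | wc -c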
00:17:38.948 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.948 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.949 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.949 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.949 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.949 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.207 00:17:39.207 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.207 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.207 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.467 { 00:17:39.467 "cntlid": 53, 00:17:39.467 "qid": 0, 00:17:39.467 "state": "enabled", 00:17:39.467 "thread": "nvmf_tgt_poll_group_000", 00:17:39.467 "listen_address": { 00:17:39.467 "trtype": "RDMA", 00:17:39.467 "adrfam": "IPv4", 00:17:39.467 "traddr": "192.168.100.8", 00:17:39.467 "trsvcid": "4420" 00:17:39.467 }, 00:17:39.467 "peer_address": { 00:17:39.467 "trtype": "RDMA", 00:17:39.467 "adrfam": "IPv4", 00:17:39.467 "traddr": "192.168.100.8", 00:17:39.467 "trsvcid": "55911" 00:17:39.467 }, 00:17:39.467 "auth": { 00:17:39.467 "state": "completed", 00:17:39.467 "digest": "sha384", 00:17:39.467 "dhgroup": "null" 00:17:39.467 } 00:17:39.467 } 00:17:39.467 ]' 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.467 07:23:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.726 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:17:40.304 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.304 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:40.304 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.304 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.304 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.304 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.304 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:40.304 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:40.619 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:40.619 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.619 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.619 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:40.619 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:40.619 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.619 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:40.619 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:40.619 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.619 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.619 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.619 07:23:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.878 00:17:40.878 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.878 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.878 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.136 { 00:17:41.136 "cntlid": 55, 00:17:41.136 "qid": 0, 00:17:41.136 "state": "enabled", 00:17:41.136 "thread": "nvmf_tgt_poll_group_000", 00:17:41.136 "listen_address": { 00:17:41.136 "trtype": "RDMA", 00:17:41.136 "adrfam": "IPv4", 00:17:41.136 "traddr": "192.168.100.8", 00:17:41.136 "trsvcid": "4420" 00:17:41.136 }, 00:17:41.136 "peer_address": { 00:17:41.136 "trtype": "RDMA", 00:17:41.136 "adrfam": "IPv4", 00:17:41.136 "traddr": "192.168.100.8", 00:17:41.136 "trsvcid": "60904" 00:17:41.136 }, 00:17:41.136 "auth": { 00:17:41.136 "state": "completed", 00:17:41.136 "digest": "sha384", 00:17:41.136 "dhgroup": "null" 00:17:41.136 } 00:17:41.136 } 00:17:41.136 ]' 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.136 07:23:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.136 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.395 07:23:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:17:41.962 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.962 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:41.962 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.962 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.962 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.962 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.962 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.962 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:41.962 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:42.220 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:42.220 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.220 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.220 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:42.220 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:42.220 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.220 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.220 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.220 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.220 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.221 07:23:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.221 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.479 00:17:42.479 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.479 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.479 07:23:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.738 { 00:17:42.738 "cntlid": 57, 00:17:42.738 "qid": 0, 00:17:42.738 "state": "enabled", 00:17:42.738 "thread": "nvmf_tgt_poll_group_000", 00:17:42.738 "listen_address": { 00:17:42.738 "trtype": "RDMA", 00:17:42.738 "adrfam": "IPv4", 00:17:42.738 "traddr": "192.168.100.8", 00:17:42.738 "trsvcid": "4420" 00:17:42.738 }, 00:17:42.738 "peer_address": { 00:17:42.738 "trtype": "RDMA", 00:17:42.738 "adrfam": "IPv4", 00:17:42.738 "traddr": "192.168.100.8", 00:17:42.738 "trsvcid": "36491" 00:17:42.738 }, 00:17:42.738 "auth": { 00:17:42.738 "state": "completed", 00:17:42.738 "digest": "sha384", 00:17:42.738 "dhgroup": "ffdhe2048" 00:17:42.738 } 00:17:42.738 } 00:17:42.738 ]' 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.738 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.997 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:17:43.565 07:23:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.565 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:43.565 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.565 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.565 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.565 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.565 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:43.565 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:43.824 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:43.824 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.824 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:43.824 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:43.824 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:43.824 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.824 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.824 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.824 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.824 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.824 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.824 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.082 00:17:44.082 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.082 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.082 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.341 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.341 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.341 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.341 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.341 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.341 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.341 { 00:17:44.341 "cntlid": 59, 00:17:44.341 "qid": 0, 00:17:44.341 "state": "enabled", 00:17:44.341 "thread": "nvmf_tgt_poll_group_000", 00:17:44.341 "listen_address": { 00:17:44.342 "trtype": "RDMA", 00:17:44.342 "adrfam": "IPv4", 00:17:44.342 "traddr": "192.168.100.8", 00:17:44.342 "trsvcid": "4420" 00:17:44.342 }, 00:17:44.342 "peer_address": { 00:17:44.342 "trtype": "RDMA", 00:17:44.342 "adrfam": "IPv4", 00:17:44.342 "traddr": "192.168.100.8", 00:17:44.342 "trsvcid": "45163" 00:17:44.342 }, 00:17:44.342 "auth": { 00:17:44.342 "state": "completed", 00:17:44.342 "digest": "sha384", 00:17:44.342 "dhgroup": "ffdhe2048" 00:17:44.342 } 00:17:44.342 } 00:17:44.342 ]' 00:17:44.342 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.342 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.342 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.342 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:44.342 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.342 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.342 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.342 07:23:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.601 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:17:45.169 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.428 07:23:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.687 00:17:45.687 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.687 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.687 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.946 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.946 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.946 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.946 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.946 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.946 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.946 { 00:17:45.946 "cntlid": 61, 00:17:45.946 "qid": 0, 00:17:45.946 "state": "enabled", 00:17:45.946 "thread": "nvmf_tgt_poll_group_000", 00:17:45.946 "listen_address": { 00:17:45.946 "trtype": "RDMA", 00:17:45.946 "adrfam": "IPv4", 00:17:45.946 "traddr": "192.168.100.8", 00:17:45.946 "trsvcid": "4420" 00:17:45.946 }, 00:17:45.946 "peer_address": { 00:17:45.946 "trtype": "RDMA", 00:17:45.946 "adrfam": "IPv4", 00:17:45.946 "traddr": "192.168.100.8", 00:17:45.946 "trsvcid": "60228" 00:17:45.946 }, 00:17:45.946 "auth": { 00:17:45.946 "state": "completed", 00:17:45.946 "digest": "sha384", 00:17:45.946 "dhgroup": "ffdhe2048" 00:17:45.946 } 00:17:45.946 } 00:17:45.946 ]' 00:17:45.946 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.946 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.946 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.946 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.946 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.205 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.205 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.205 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.205 07:23:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret 
DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:17:46.773 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.032 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.291 00:17:47.291 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.291 07:23:19 
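
Each pass of this trace is one connect_authenticate cycle: restrict the host initiator to the digest/dhgroup under test, register the host NQN on the subsystem with the key pair for this index, attach a controller through the host bdev layer (which runs the DH-HMAC-CHAP handshake), verify the resulting qpair, then detach, repeat the handshake once more with the nvme CLI, and remove the host. A condensed sketch of the cycle, where SUBSYS, HOSTNQN, and RPC are placeholder names (not the script's internals) and key2/ckey2 are key names set up earlier in auth.sh:

    #!/usr/bin/env bash
    # One connect_authenticate pass, condensed from the trace (sketch only).
    SUBSYS=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Host side: only offer the digest/dhgroup combination under test.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # Target side (default RPC socket): allow this host with this key pair.
    "$RPC" nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Attaching the controller performs the authentication handshake.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$HOSTNQN" -n "$SUBSYS" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # ...verify the qpair's auth block, then tear down for the next pass.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$RPC" nvmf_subsystem_remove_host "$SUBSYS" "$HOSTNQN"
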
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.291 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.550 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.550 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.550 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.550 07:23:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.550 07:23:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.550 07:23:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.550 { 00:17:47.550 "cntlid": 63, 00:17:47.550 "qid": 0, 00:17:47.550 "state": "enabled", 00:17:47.550 "thread": "nvmf_tgt_poll_group_000", 00:17:47.550 "listen_address": { 00:17:47.550 "trtype": "RDMA", 00:17:47.550 "adrfam": "IPv4", 00:17:47.550 "traddr": "192.168.100.8", 00:17:47.550 "trsvcid": "4420" 00:17:47.550 }, 00:17:47.550 "peer_address": { 00:17:47.550 "trtype": "RDMA", 00:17:47.550 "adrfam": "IPv4", 00:17:47.550 "traddr": "192.168.100.8", 00:17:47.550 "trsvcid": "34122" 00:17:47.550 }, 00:17:47.550 "auth": { 00:17:47.550 "state": "completed", 00:17:47.550 "digest": "sha384", 00:17:47.550 "dhgroup": "ffdhe2048" 00:17:47.550 } 00:17:47.550 } 00:17:47.550 ]' 00:17:47.550 07:23:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.550 07:23:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.550 07:23:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.810 07:23:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.810 07:23:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.810 07:23:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.810 07:23:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.810 07:23:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.810 07:23:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:17:48.378 07:23:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.637 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:48.637 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.637 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.637 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.637 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.637 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.637 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.637 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.897 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:48.897 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.897 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:48.897 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:48.897 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:48.897 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.897 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.897 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.897 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.897 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.897 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.897 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.156 00:17:49.157 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.157 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.157 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.157 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.157 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.157 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.157 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.157 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.157 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.157 { 00:17:49.157 "cntlid": 65, 00:17:49.157 "qid": 0, 00:17:49.157 "state": "enabled", 00:17:49.157 "thread": "nvmf_tgt_poll_group_000", 00:17:49.157 "listen_address": { 00:17:49.157 "trtype": "RDMA", 00:17:49.157 "adrfam": "IPv4", 00:17:49.157 "traddr": "192.168.100.8", 00:17:49.157 "trsvcid": "4420" 00:17:49.157 }, 00:17:49.157 "peer_address": { 00:17:49.157 "trtype": "RDMA", 00:17:49.157 "adrfam": "IPv4", 00:17:49.157 "traddr": "192.168.100.8", 00:17:49.157 "trsvcid": "38148" 00:17:49.157 }, 00:17:49.157 "auth": { 00:17:49.157 "state": "completed", 00:17:49.157 "digest": "sha384", 00:17:49.157 "dhgroup": "ffdhe3072" 00:17:49.157 } 00:17:49.157 } 00:17:49.157 ]' 00:17:49.157 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.157 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.157 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.416 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.416 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.416 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.416 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.416 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.416 07:23:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.353 07:23:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.612 00:17:50.613 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.613 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.613 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.872 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.872 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.872 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.872 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.872 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.872 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.872 { 00:17:50.872 "cntlid": 67, 00:17:50.872 "qid": 0, 00:17:50.872 "state": "enabled", 00:17:50.872 "thread": "nvmf_tgt_poll_group_000", 00:17:50.872 "listen_address": { 00:17:50.872 "trtype": "RDMA", 00:17:50.872 "adrfam": "IPv4", 00:17:50.872 "traddr": "192.168.100.8", 00:17:50.872 "trsvcid": "4420" 00:17:50.872 }, 00:17:50.872 "peer_address": { 00:17:50.872 "trtype": "RDMA", 00:17:50.872 "adrfam": "IPv4", 00:17:50.872 "traddr": "192.168.100.8", 00:17:50.872 "trsvcid": "38264" 00:17:50.872 }, 00:17:50.872 "auth": { 00:17:50.872 "state": "completed", 00:17:50.872 "digest": "sha384", 00:17:50.872 "dhgroup": "ffdhe3072" 00:17:50.872 } 00:17:50.872 } 00:17:50.872 ]' 00:17:50.872 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.872 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.872 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.872 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.872 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.130 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.130 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.130 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.130 07:23:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.068 07:23:24 
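
The DHHC-1 strings passed to nvme connect are the NVMe spec's secret representation: the two-digit field after "DHHC-1:" names the transformation applied to the key material (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512, matching the prefixes of key0 through key3 in this run), and the body is base64-encoded key bytes plus a CRC-32, terminated by ":". nvme-cli can mint such secrets with gen-dhchap-key; exact flag spellings vary across nvme-cli versions, so treat this as a sketch:

    # Mint a 48-byte secret transformed with SHA-384 (hmac id 2); flag names
    # follow recent nvme-cli and may differ on older versions.
    nvme gen-dhchap-key --hmac=2 --key-length=48 \
        --nqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    # Prints something like: DHHC-1:02:<base64(key || crc32)>:

The resulting string is what the trace feeds to --dhchap-secret and --dhchap-ctrl-secret on the nvme connect lines.
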
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.068 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.327 00:17:52.327 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.327 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.327 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.587 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.587 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.587 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 
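
The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line visible in each pass is a bash optional-argument idiom: ${var:+word} expands to word only when var is set and non-empty, so the array either holds the flag pair for bidirectional authentication or stays empty, in which case "${ckey[@]}" contributes nothing to the rpc_cmd line. That is why the key3 passes above call nvmf_subsystem_add_host with --dhchap-key key3 alone. A standalone illustration with made-up values:

    # ${var:+word} drops the flag pair entirely when no controller key exists.
    ckeys=([0]="sekret0" [1]="sekret1" [2]="sekret2" [3]="")
    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo nvmf_subsystem_add_host ... --dhchap-key "key$keyid" "${ckey[@]}"
    done
    # keyids 0-2 print the extra --dhchap-ctrlr-key flag; keyid 3 does not.
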
07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.587 { 00:17:52.587 "cntlid": 69, 00:17:52.587 "qid": 0, 00:17:52.587 "state": "enabled", 00:17:52.587 "thread": "nvmf_tgt_poll_group_000", 00:17:52.587 "listen_address": { 00:17:52.587 "trtype": "RDMA", 00:17:52.587 "adrfam": "IPv4", 00:17:52.587 "traddr": "192.168.100.8", 00:17:52.587 "trsvcid": "4420" 00:17:52.587 }, 00:17:52.587 "peer_address": { 00:17:52.587 "trtype": "RDMA", 00:17:52.587 "adrfam": "IPv4", 00:17:52.587 "traddr": "192.168.100.8", 00:17:52.587 "trsvcid": "48186" 00:17:52.587 }, 00:17:52.587 "auth": { 00:17:52.587 "state": "completed", 00:17:52.587 "digest": "sha384", 00:17:52.587 "dhgroup": "ffdhe3072" 00:17:52.587 } 00:17:52.587 } 00:17:52.587 ]' 00:17:52.587 07:23:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.587 07:23:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.587 07:23:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.587 07:23:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.587 07:23:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.587 07:23:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.587 07:23:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.587 07:23:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.846 07:23:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:17:53.414 07:23:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.704 07:23:26 
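
After every attach, the script pulls the live qpair list from the target and asserts the negotiated parameters embedded in its auth block, as in the JSON above. Equivalent standalone checks, assuming rpc_cmd resolves to scripts/rpc.py against the target's default socket as elsewhere in this suite:

    # Assert digest, dhgroup, and completion state on the first qpair (sketch).
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
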
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.704 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.963 00:17:53.963 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.963 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.963 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.223 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.223 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.223 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.223 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.223 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.223 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.223 { 00:17:54.223 "cntlid": 71, 00:17:54.223 "qid": 0, 00:17:54.223 "state": "enabled", 00:17:54.223 "thread": "nvmf_tgt_poll_group_000", 00:17:54.223 
"listen_address": { 00:17:54.223 "trtype": "RDMA", 00:17:54.223 "adrfam": "IPv4", 00:17:54.223 "traddr": "192.168.100.8", 00:17:54.223 "trsvcid": "4420" 00:17:54.223 }, 00:17:54.223 "peer_address": { 00:17:54.223 "trtype": "RDMA", 00:17:54.223 "adrfam": "IPv4", 00:17:54.223 "traddr": "192.168.100.8", 00:17:54.223 "trsvcid": "34949" 00:17:54.223 }, 00:17:54.223 "auth": { 00:17:54.223 "state": "completed", 00:17:54.223 "digest": "sha384", 00:17:54.223 "dhgroup": "ffdhe3072" 00:17:54.223 } 00:17:54.223 } 00:17:54.223 ]' 00:17:54.223 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.223 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.223 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.482 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.482 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.482 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.482 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.482 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.482 07:23:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:17:55.051 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.317 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:55.317 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.317 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.317 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.317 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.317 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.317 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:55.317 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:55.577 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe4096 0 00:17:55.577 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.577 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:55.577 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:55.577 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.577 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.577 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.577 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.577 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.577 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.577 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.577 07:23:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.837 00:17:55.837 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.837 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.837 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.837 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.837 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.837 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.837 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.837 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.837 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.837 { 00:17:55.837 "cntlid": 73, 00:17:55.837 "qid": 0, 00:17:55.837 "state": "enabled", 00:17:55.837 "thread": "nvmf_tgt_poll_group_000", 00:17:55.837 "listen_address": { 00:17:55.837 "trtype": "RDMA", 00:17:55.837 "adrfam": "IPv4", 00:17:55.837 "traddr": "192.168.100.8", 00:17:55.837 "trsvcid": "4420" 00:17:55.837 }, 00:17:55.837 "peer_address": { 00:17:55.837 "trtype": "RDMA", 00:17:55.837 
"adrfam": "IPv4", 00:17:55.837 "traddr": "192.168.100.8", 00:17:55.837 "trsvcid": "56370" 00:17:55.837 }, 00:17:55.837 "auth": { 00:17:55.837 "state": "completed", 00:17:55.837 "digest": "sha384", 00:17:55.837 "dhgroup": "ffdhe4096" 00:17:55.837 } 00:17:55.837 } 00:17:55.837 ]' 00:17:55.837 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.096 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.096 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.096 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.096 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.096 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.096 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.096 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.355 07:23:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:17:56.924 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.924 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:56.924 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.924 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.924 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.924 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.924 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.924 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.183 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:57.183 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.183 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:17:57.183 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:57.183 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.183 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.183 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.183 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.183 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.183 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.183 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.183 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.442 00:17:57.442 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.442 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.442 07:23:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.702 { 00:17:57.702 "cntlid": 75, 00:17:57.702 "qid": 0, 00:17:57.702 "state": "enabled", 00:17:57.702 "thread": "nvmf_tgt_poll_group_000", 00:17:57.702 "listen_address": { 00:17:57.702 "trtype": "RDMA", 00:17:57.702 "adrfam": "IPv4", 00:17:57.702 "traddr": "192.168.100.8", 00:17:57.702 "trsvcid": "4420" 00:17:57.702 }, 00:17:57.702 "peer_address": { 00:17:57.702 "trtype": "RDMA", 00:17:57.702 "adrfam": "IPv4", 00:17:57.702 "traddr": "192.168.100.8", 00:17:57.702 "trsvcid": "55704" 00:17:57.702 }, 00:17:57.702 "auth": { 00:17:57.702 "state": "completed", 00:17:57.702 "digest": "sha384", 00:17:57.702 "dhgroup": "ffdhe4096" 00:17:57.702 } 00:17:57.702 } 
00:17:57.702 ]' 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.702 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.961 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:17:58.530 07:23:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.790 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.049 00:17:59.049 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.049 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.049 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.308 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.308 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.308 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.308 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.308 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.308 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.308 { 00:17:59.308 "cntlid": 77, 00:17:59.308 "qid": 0, 00:17:59.308 "state": "enabled", 00:17:59.308 "thread": "nvmf_tgt_poll_group_000", 00:17:59.308 "listen_address": { 00:17:59.308 "trtype": "RDMA", 00:17:59.308 "adrfam": "IPv4", 00:17:59.308 "traddr": "192.168.100.8", 00:17:59.308 "trsvcid": "4420" 00:17:59.308 }, 00:17:59.308 "peer_address": { 00:17:59.308 "trtype": "RDMA", 00:17:59.308 "adrfam": "IPv4", 00:17:59.308 "traddr": "192.168.100.8", 00:17:59.308 "trsvcid": "53240" 00:17:59.308 }, 00:17:59.308 "auth": { 00:17:59.308 "state": "completed", 00:17:59.308 "digest": "sha384", 00:17:59.308 "dhgroup": "ffdhe4096" 00:17:59.308 } 00:17:59.308 } 00:17:59.308 ]' 00:17:59.308 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.308 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.308 07:23:31 
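
The target/auth.sh@92 and @93 markers recurring through the trace are the two driver loops: an outer walk over the DH groups and an inner walk over the key indices, re-running the cycle for every combination (this sha384 run has reached ffdhe4096 at this point). A minimal reconstruction of that shape, assuming keys was populated earlier, hostrpc wraps rpc.py -s /var/tmp/host.sock as the @31 marker shows, and connect_authenticate is the function whose body the trace expands:

    # Driver shape inferred from the @92/@93 loop markers (sketch).
    digest=sha384
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this section
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
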
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.308 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.308 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.567 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.567 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.567 07:23:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.567 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:18:00.135 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.395 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:00.395 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.395 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.395 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.395 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.395 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:00.395 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:00.655 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:00.655 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.655 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:00.655 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:00.655 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:00.655 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.655 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:00.655 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.655 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.655 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.655 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.655 07:23:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.914 00:18:00.914 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.914 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.914 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.914 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.914 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.914 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.914 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.914 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.914 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.914 { 00:18:00.914 "cntlid": 79, 00:18:00.914 "qid": 0, 00:18:00.914 "state": "enabled", 00:18:00.914 "thread": "nvmf_tgt_poll_group_000", 00:18:00.914 "listen_address": { 00:18:00.914 "trtype": "RDMA", 00:18:00.914 "adrfam": "IPv4", 00:18:00.914 "traddr": "192.168.100.8", 00:18:00.914 "trsvcid": "4420" 00:18:00.914 }, 00:18:00.914 "peer_address": { 00:18:00.914 "trtype": "RDMA", 00:18:00.914 "adrfam": "IPv4", 00:18:00.914 "traddr": "192.168.100.8", 00:18:00.914 "trsvcid": "36835" 00:18:00.914 }, 00:18:00.914 "auth": { 00:18:00.914 "state": "completed", 00:18:00.914 "digest": "sha384", 00:18:00.914 "dhgroup": "ffdhe4096" 00:18:00.914 } 00:18:00.914 } 00:18:00.914 ]' 00:18:00.914 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.173 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.173 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.173 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.173 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:18:01.173 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.173 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.173 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.432 07:23:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:18:02.001 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.001 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:02.001 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.001 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.001 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.001 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.001 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.001 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:02.001 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:02.261 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:02.261 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.261 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.261 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:02.261 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:02.261 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.261 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.261 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.261 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.261 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.261 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.261 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.520 00:18:02.520 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.520 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.520 07:23:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.779 { 00:18:02.779 "cntlid": 81, 00:18:02.779 "qid": 0, 00:18:02.779 "state": "enabled", 00:18:02.779 "thread": "nvmf_tgt_poll_group_000", 00:18:02.779 "listen_address": { 00:18:02.779 "trtype": "RDMA", 00:18:02.779 "adrfam": "IPv4", 00:18:02.779 "traddr": "192.168.100.8", 00:18:02.779 "trsvcid": "4420" 00:18:02.779 }, 00:18:02.779 "peer_address": { 00:18:02.779 "trtype": "RDMA", 00:18:02.779 "adrfam": "IPv4", 00:18:02.779 "traddr": "192.168.100.8", 00:18:02.779 "trsvcid": "58868" 00:18:02.779 }, 00:18:02.779 "auth": { 00:18:02.779 "state": "completed", 00:18:02.779 "digest": "sha384", 00:18:02.779 "dhgroup": "ffdhe6144" 00:18:02.779 } 00:18:02.779 } 00:18:02.779 ]' 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.779 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.038 07:23:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:18:03.606 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.865 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.866 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.434 00:18:04.434 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.434 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.434 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.434 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.434 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.434 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.434 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.434 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.434 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.434 { 00:18:04.434 "cntlid": 83, 00:18:04.434 "qid": 0, 00:18:04.434 "state": "enabled", 00:18:04.434 "thread": "nvmf_tgt_poll_group_000", 00:18:04.434 "listen_address": { 00:18:04.434 "trtype": "RDMA", 00:18:04.434 "adrfam": "IPv4", 00:18:04.434 "traddr": "192.168.100.8", 00:18:04.434 "trsvcid": "4420" 00:18:04.434 }, 00:18:04.434 "peer_address": { 00:18:04.434 "trtype": "RDMA", 00:18:04.434 "adrfam": "IPv4", 00:18:04.434 "traddr": "192.168.100.8", 00:18:04.434 "trsvcid": "53747" 00:18:04.434 }, 00:18:04.434 "auth": { 00:18:04.434 "state": "completed", 00:18:04.434 "digest": "sha384", 00:18:04.434 "dhgroup": "ffdhe6144" 00:18:04.434 } 00:18:04.434 } 00:18:04.434 ]' 00:18:04.434 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.434 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.435 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.694 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.694 07:23:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.694 07:23:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.694 07:23:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.694 07:23:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:04.694 07:23:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:18:05.262 07:23:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.520 07:23:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:05.520 07:23:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.520 07:23:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.520 07:23:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.520 07:23:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.520 07:23:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.520 07:23:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.780 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:05.780 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.780 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.780 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.780 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:05.780 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.780 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.780 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.780 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.780 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.780 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.780 07:23:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.039 00:18:06.039 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.039 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.039 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.298 { 00:18:06.298 "cntlid": 85, 00:18:06.298 "qid": 0, 00:18:06.298 "state": "enabled", 00:18:06.298 "thread": "nvmf_tgt_poll_group_000", 00:18:06.298 "listen_address": { 00:18:06.298 "trtype": "RDMA", 00:18:06.298 "adrfam": "IPv4", 00:18:06.298 "traddr": "192.168.100.8", 00:18:06.298 "trsvcid": "4420" 00:18:06.298 }, 00:18:06.298 "peer_address": { 00:18:06.298 "trtype": "RDMA", 00:18:06.298 "adrfam": "IPv4", 00:18:06.298 "traddr": "192.168.100.8", 00:18:06.298 "trsvcid": "52739" 00:18:06.298 }, 00:18:06.298 "auth": { 00:18:06.298 "state": "completed", 00:18:06.298 "digest": "sha384", 00:18:06.298 "dhgroup": "ffdhe6144" 00:18:06.298 } 00:18:06.298 } 00:18:06.298 ]' 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.298 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.557 07:23:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:18:07.162 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.162 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:07.162 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.162 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.421 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.422 07:23:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.990 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.990 { 00:18:07.990 "cntlid": 87, 00:18:07.990 "qid": 0, 00:18:07.990 "state": "enabled", 00:18:07.990 "thread": "nvmf_tgt_poll_group_000", 00:18:07.990 "listen_address": { 00:18:07.990 "trtype": "RDMA", 00:18:07.990 "adrfam": "IPv4", 00:18:07.990 "traddr": "192.168.100.8", 00:18:07.990 "trsvcid": "4420" 00:18:07.990 }, 00:18:07.990 "peer_address": { 00:18:07.990 "trtype": "RDMA", 00:18:07.990 "adrfam": "IPv4", 00:18:07.990 "traddr": "192.168.100.8", 00:18:07.990 "trsvcid": "43490" 00:18:07.990 }, 00:18:07.990 "auth": { 00:18:07.990 "state": "completed", 00:18:07.990 "digest": "sha384", 00:18:07.990 "dhgroup": "ffdhe6144" 00:18:07.990 } 00:18:07.990 } 00:18:07.990 ]' 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.990 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.249 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.249 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.249 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.249 07:23:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.186 07:23:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.754 00:18:09.754 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:18:09.754 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.754 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.013 { 00:18:10.013 "cntlid": 89, 00:18:10.013 "qid": 0, 00:18:10.013 "state": "enabled", 00:18:10.013 "thread": "nvmf_tgt_poll_group_000", 00:18:10.013 "listen_address": { 00:18:10.013 "trtype": "RDMA", 00:18:10.013 "adrfam": "IPv4", 00:18:10.013 "traddr": "192.168.100.8", 00:18:10.013 "trsvcid": "4420" 00:18:10.013 }, 00:18:10.013 "peer_address": { 00:18:10.013 "trtype": "RDMA", 00:18:10.013 "adrfam": "IPv4", 00:18:10.013 "traddr": "192.168.100.8", 00:18:10.013 "trsvcid": "37342" 00:18:10.013 }, 00:18:10.013 "auth": { 00:18:10.013 "state": "completed", 00:18:10.013 "digest": "sha384", 00:18:10.013 "dhgroup": "ffdhe8192" 00:18:10.013 } 00:18:10.013 } 00:18:10.013 ]' 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.013 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.273 07:23:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:18:10.843 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
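The records above repeat one fixed pattern per (digest, dhgroup, keyid) combination: configure the host-side DH-HMAC-CHAP options, register the host NQN on the subsystem with the key under test, attach through the SPDK host stack, assert the negotiated auth parameters on the target's qpair, detach, then repeat the handshake once more with the kernel initiator via nvme-cli before removing the host again. The sketch below condenses that cycle from the commands recorded in this log; it is a minimal reconstruction, not the verbatim target/auth.sh source, and the loop variables, the $key/$ckey placeholders, and the bare rpc invocations for the target-side calls are illustrative assumptions.

    # One connect_authenticate cycle, reconstructed from the log above.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    nqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

    digest=sha384 dhgroup=ffdhe8192 keyid=0    # current point in the sweep

    # Host side: restrict the bdev driver to the digest/dhgroup under test.
    $rpc -s $hostsock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side: allow the host with this key (per the ${ckeys[...]:+...}
    # expansion logged above, the real script adds --dhchap-ctrlr-key only
    # when a controller key exists for this keyid; key3 has none in this run).
    $rpc nvmf_subsystem_add_host "$nqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach via the SPDK host stack and check the negotiated parameters.
    $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$nqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    qpairs=$($rpc nvmf_subsystem_get_qpairs "$nqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0

    # Kernel initiator: repeat the handshake with nvme-cli, then clean up.
    # $key/$ckey stand for the DHHC-1:... secrets configured for key$keyid
    # (see the nvme connect lines above for the actual values).
    nvme connect -t rdma -a 192.168.100.8 -n "$nqn" -i 1 -q "$hostnqn" \
        --hostid 8013ee90-59d8-e711-906e-00163566263e \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n "$nqn"
    $rpc nvmf_subsystem_remove_host "$nqn" "$hostnqn"

The sweep advances one dimension at a time: the "for digest" / "for dhgroup" / "for keyid" markers at target/auth.sh@91-93 in the surrounding records show keyid as the innermost loop, which is why the same digest/dhgroup pair recurs for keys 0 through 3 before the dhgroup (and eventually the digest, e.g. the sha384-to-sha512 switch below) advances.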
00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.101 07:23:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.667 00:18:11.667 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.667 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.667 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.925 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.925 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.925 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.925 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.925 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.925 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.925 { 00:18:11.925 "cntlid": 91, 00:18:11.925 "qid": 0, 00:18:11.925 "state": "enabled", 00:18:11.925 "thread": "nvmf_tgt_poll_group_000", 00:18:11.925 "listen_address": { 00:18:11.925 "trtype": "RDMA", 00:18:11.925 "adrfam": "IPv4", 00:18:11.925 "traddr": "192.168.100.8", 00:18:11.925 "trsvcid": "4420" 00:18:11.925 }, 00:18:11.925 "peer_address": { 00:18:11.925 "trtype": "RDMA", 00:18:11.925 "adrfam": "IPv4", 00:18:11.925 "traddr": "192.168.100.8", 00:18:11.925 "trsvcid": "60397" 00:18:11.925 }, 00:18:11.925 "auth": { 00:18:11.925 "state": "completed", 00:18:11.925 "digest": "sha384", 00:18:11.925 "dhgroup": "ffdhe8192" 00:18:11.925 } 00:18:11.925 } 00:18:11.925 ]' 00:18:11.925 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.925 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.925 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.926 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.926 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.926 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.926 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.926 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.184 07:23:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:18:12.753 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.753 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:12.753 07:23:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.753 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.013 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.580 00:18:13.580 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.580 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.580 07:23:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.580 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.580 07:23:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.580 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.580 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.580 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.839 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.839 { 00:18:13.839 "cntlid": 93, 00:18:13.839 "qid": 0, 00:18:13.839 "state": "enabled", 00:18:13.839 "thread": "nvmf_tgt_poll_group_000", 00:18:13.839 "listen_address": { 00:18:13.839 "trtype": "RDMA", 00:18:13.839 "adrfam": "IPv4", 00:18:13.839 "traddr": "192.168.100.8", 00:18:13.839 "trsvcid": "4420" 00:18:13.839 }, 00:18:13.839 "peer_address": { 00:18:13.839 "trtype": "RDMA", 00:18:13.839 "adrfam": "IPv4", 00:18:13.839 "traddr": "192.168.100.8", 00:18:13.839 "trsvcid": "50000" 00:18:13.839 }, 00:18:13.839 "auth": { 00:18:13.839 "state": "completed", 00:18:13.839 "digest": "sha384", 00:18:13.839 "dhgroup": "ffdhe8192" 00:18:13.839 } 00:18:13.839 } 00:18:13.839 ]' 00:18:13.839 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.839 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.839 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.839 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.839 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.839 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.839 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.839 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.098 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:18:14.666 07:23:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.666 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:14.666 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.666 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.666 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.666 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.666 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:14.666 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:14.925 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:14.925 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.925 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:14.925 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.925 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:14.925 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.925 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:14.925 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.925 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.925 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.925 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.925 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.494 00:18:15.494 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.494 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.494 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.494 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.494 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.494 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.494 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.494 
07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.494 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.494 { 00:18:15.494 "cntlid": 95, 00:18:15.494 "qid": 0, 00:18:15.494 "state": "enabled", 00:18:15.494 "thread": "nvmf_tgt_poll_group_000", 00:18:15.494 "listen_address": { 00:18:15.494 "trtype": "RDMA", 00:18:15.494 "adrfam": "IPv4", 00:18:15.494 "traddr": "192.168.100.8", 00:18:15.494 "trsvcid": "4420" 00:18:15.494 }, 00:18:15.494 "peer_address": { 00:18:15.494 "trtype": "RDMA", 00:18:15.494 "adrfam": "IPv4", 00:18:15.494 "traddr": "192.168.100.8", 00:18:15.494 "trsvcid": "40284" 00:18:15.494 }, 00:18:15.494 "auth": { 00:18:15.494 "state": "completed", 00:18:15.494 "digest": "sha384", 00:18:15.494 "dhgroup": "ffdhe8192" 00:18:15.494 } 00:18:15.494 } 00:18:15.494 ]' 00:18:15.494 07:23:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.494 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.494 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.753 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.753 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.753 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.753 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.753 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.012 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:18:16.579 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.579 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:16.579 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.579 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.579 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.579 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:16.580 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.580 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.580 07:23:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:16.580 07:23:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:16.839 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:16.839 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.839 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:16.839 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:16.839 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:16.839 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.839 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.839 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.839 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.839 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.839 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.839 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.098 00:18:17.098 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.098 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.098 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.098 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.358 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.358 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.358 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.358 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.358 07:23:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.358 { 00:18:17.358 "cntlid": 97, 00:18:17.358 "qid": 0, 00:18:17.358 "state": "enabled", 00:18:17.358 "thread": "nvmf_tgt_poll_group_000", 00:18:17.358 "listen_address": { 00:18:17.358 "trtype": "RDMA", 00:18:17.358 "adrfam": "IPv4", 00:18:17.358 "traddr": "192.168.100.8", 00:18:17.358 "trsvcid": "4420" 00:18:17.358 }, 00:18:17.358 "peer_address": { 00:18:17.358 "trtype": "RDMA", 00:18:17.358 "adrfam": "IPv4", 00:18:17.358 "traddr": "192.168.100.8", 00:18:17.358 "trsvcid": "58069" 00:18:17.358 }, 00:18:17.358 "auth": { 00:18:17.358 "state": "completed", 00:18:17.358 "digest": "sha512", 00:18:17.358 "dhgroup": "null" 00:18:17.358 } 00:18:17.358 } 00:18:17.358 ]' 00:18:17.358 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.358 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.358 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.358 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:17.358 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.358 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.358 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.358 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.617 07:23:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:18:18.184 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.184 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:18.184 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.184 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.184 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.184 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.184 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:18.184 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:18.443 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:18.443 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.443 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:18.443 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:18.443 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:18.443 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.443 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.443 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.443 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.443 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.443 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.443 07:23:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.702 00:18:18.702 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.702 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.702 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.961 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.961 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.961 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.961 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.961 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.961 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.961 { 00:18:18.961 "cntlid": 99, 00:18:18.961 "qid": 0, 00:18:18.961 "state": "enabled", 00:18:18.961 "thread": "nvmf_tgt_poll_group_000", 00:18:18.961 
"listen_address": { 00:18:18.961 "trtype": "RDMA", 00:18:18.961 "adrfam": "IPv4", 00:18:18.961 "traddr": "192.168.100.8", 00:18:18.961 "trsvcid": "4420" 00:18:18.961 }, 00:18:18.961 "peer_address": { 00:18:18.961 "trtype": "RDMA", 00:18:18.961 "adrfam": "IPv4", 00:18:18.961 "traddr": "192.168.100.8", 00:18:18.961 "trsvcid": "60444" 00:18:18.961 }, 00:18:18.961 "auth": { 00:18:18.961 "state": "completed", 00:18:18.961 "digest": "sha512", 00:18:18.961 "dhgroup": "null" 00:18:18.961 } 00:18:18.961 } 00:18:18.961 ]' 00:18:18.961 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.961 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.961 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.961 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:18.961 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.961 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.962 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.962 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.221 07:23:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:18:19.789 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.789 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:19.789 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.789 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:20.074 07:23:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.074 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.375 00:18:20.375 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.375 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.375 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.635 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.635 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.635 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.635 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.635 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.635 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.635 { 00:18:20.635 "cntlid": 101, 00:18:20.635 "qid": 0, 00:18:20.635 "state": "enabled", 00:18:20.635 "thread": "nvmf_tgt_poll_group_000", 00:18:20.635 "listen_address": { 00:18:20.635 "trtype": "RDMA", 00:18:20.635 "adrfam": "IPv4", 00:18:20.635 "traddr": "192.168.100.8", 00:18:20.635 "trsvcid": "4420" 00:18:20.635 }, 00:18:20.635 "peer_address": { 00:18:20.635 "trtype": "RDMA", 00:18:20.635 "adrfam": "IPv4", 00:18:20.635 "traddr": "192.168.100.8", 00:18:20.635 
"trsvcid": "55983" 00:18:20.635 }, 00:18:20.635 "auth": { 00:18:20.635 "state": "completed", 00:18:20.635 "digest": "sha512", 00:18:20.635 "dhgroup": "null" 00:18:20.635 } 00:18:20.635 } 00:18:20.635 ]' 00:18:20.635 07:23:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.635 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.635 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.635 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:20.635 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.635 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.635 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.635 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.894 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:18:21.461 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.461 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:21.461 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.461 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.461 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.461 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.461 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.461 07:23:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.720 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:21.720 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.720 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.720 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.720 07:23:54 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:21.720 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.720 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:21.720 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.720 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.720 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.720 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.720 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.979 00:18:21.979 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.979 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.979 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.239 { 00:18:22.239 "cntlid": 103, 00:18:22.239 "qid": 0, 00:18:22.239 "state": "enabled", 00:18:22.239 "thread": "nvmf_tgt_poll_group_000", 00:18:22.239 "listen_address": { 00:18:22.239 "trtype": "RDMA", 00:18:22.239 "adrfam": "IPv4", 00:18:22.239 "traddr": "192.168.100.8", 00:18:22.239 "trsvcid": "4420" 00:18:22.239 }, 00:18:22.239 "peer_address": { 00:18:22.239 "trtype": "RDMA", 00:18:22.239 "adrfam": "IPv4", 00:18:22.239 "traddr": "192.168.100.8", 00:18:22.239 "trsvcid": "54853" 00:18:22.239 }, 00:18:22.239 "auth": { 00:18:22.239 "state": "completed", 00:18:22.239 "digest": "sha512", 00:18:22.239 "dhgroup": "null" 00:18:22.239 } 00:18:22.239 } 00:18:22.239 ]' 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.239 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.499 07:23:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:18:23.067 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.330 07:23:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.589 00:18:23.589 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.589 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.589 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.849 { 00:18:23.849 "cntlid": 105, 00:18:23.849 "qid": 0, 00:18:23.849 "state": "enabled", 00:18:23.849 "thread": "nvmf_tgt_poll_group_000", 00:18:23.849 "listen_address": { 00:18:23.849 "trtype": "RDMA", 00:18:23.849 "adrfam": "IPv4", 00:18:23.849 "traddr": "192.168.100.8", 00:18:23.849 "trsvcid": "4420" 00:18:23.849 }, 00:18:23.849 "peer_address": { 00:18:23.849 "trtype": "RDMA", 00:18:23.849 "adrfam": "IPv4", 00:18:23.849 "traddr": "192.168.100.8", 00:18:23.849 "trsvcid": "53629" 00:18:23.849 }, 00:18:23.849 "auth": { 00:18:23.849 "state": "completed", 00:18:23.849 "digest": "sha512", 00:18:23.849 "dhgroup": "ffdhe2048" 00:18:23.849 } 00:18:23.849 } 00:18:23.849 ]' 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.849 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.108 07:23:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:18:24.675 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.935 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.194 00:18:25.194 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.194 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.194 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.453 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.453 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.453 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.453 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.453 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.453 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.453 { 00:18:25.453 "cntlid": 107, 00:18:25.453 "qid": 0, 00:18:25.453 "state": "enabled", 00:18:25.453 "thread": "nvmf_tgt_poll_group_000", 00:18:25.453 "listen_address": { 00:18:25.453 "trtype": "RDMA", 00:18:25.453 "adrfam": "IPv4", 00:18:25.453 "traddr": "192.168.100.8", 00:18:25.453 "trsvcid": "4420" 00:18:25.453 }, 00:18:25.453 "peer_address": { 00:18:25.453 "trtype": "RDMA", 00:18:25.453 "adrfam": "IPv4", 00:18:25.453 "traddr": "192.168.100.8", 00:18:25.453 "trsvcid": "53346" 00:18:25.453 }, 00:18:25.453 "auth": { 00:18:25.453 "state": "completed", 00:18:25.453 "digest": "sha512", 00:18:25.453 "dhgroup": "ffdhe2048" 00:18:25.453 } 00:18:25.453 } 00:18:25.453 ]' 00:18:25.453 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.453 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.453 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.713 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:25.713 07:23:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.713 07:23:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.713 07:23:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.713 07:23:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.713 07:23:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:18:26.650 07:23:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.650 07:23:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:26.650 07:23:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.650 07:23:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.650 07:23:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.650 07:23:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.650 07:23:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:26.650 07:23:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:26.650 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:26.650 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.650 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:26.650 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:26.650 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:26.650 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.650 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.650 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.650 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.650 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.650 
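Each attach is verified in-band rather than by exit code alone: rpc_cmd nvmf_subsystem_get_qpairs returns the JSON shown in the blocks above, and jq pulls .auth.digest, .auth.dhgroup and .auth.state out of qpair 0. The strings such as \s\h\a\5\1\2 on the right-hand side of the [[ ... ]] tests are not corruption; bash xtrace escapes every character of a quoted [[ == ]] pattern to show it is matched literally. Condensed, the check is:

  # Condensed form of the qpair verification above. rpc_cmd hides its socket;
  # /var/tmp/spdk.sock is the usual SPDK default and an assumption here.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  qpairs=$("$rpc" -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
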
07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.650 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.909 00:18:26.909 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.909 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.909 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.169 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.169 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.169 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.169 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.169 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.169 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.169 { 00:18:27.169 "cntlid": 109, 00:18:27.169 "qid": 0, 00:18:27.169 "state": "enabled", 00:18:27.169 "thread": "nvmf_tgt_poll_group_000", 00:18:27.169 "listen_address": { 00:18:27.169 "trtype": "RDMA", 00:18:27.169 "adrfam": "IPv4", 00:18:27.169 "traddr": "192.168.100.8", 00:18:27.169 "trsvcid": "4420" 00:18:27.169 }, 00:18:27.169 "peer_address": { 00:18:27.169 "trtype": "RDMA", 00:18:27.169 "adrfam": "IPv4", 00:18:27.169 "traddr": "192.168.100.8", 00:18:27.169 "trsvcid": "48513" 00:18:27.169 }, 00:18:27.169 "auth": { 00:18:27.169 "state": "completed", 00:18:27.169 "digest": "sha512", 00:18:27.169 "dhgroup": "ffdhe2048" 00:18:27.169 } 00:18:27.169 } 00:18:27.169 ]' 00:18:27.169 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.169 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.169 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.169 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:27.169 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.428 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.428 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.428 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.428 07:23:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:18:27.995 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.254 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:28.254 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.254 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.254 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.254 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.254 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:28.254 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:28.523 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:28.523 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.523 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.523 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:28.523 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:28.523 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.523 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:28.523 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.523 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.523 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.523 07:24:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.523 07:24:00 
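Between blocks, the only host-side change is the DH group. The null group covered earlier means DH-HMAC-CHAP runs as a plain challenge-response with no Diffie-Hellman exchange; the ffdhe2048 rounds in progress here add an ephemeral DH step on top, and ffdhe3072 follows below. Judging by the sha384 pass, which ended at ffdhe8192, the sha512 pass presumably walks the same ladder of increasing modulus sizes. The per-block switch is a single RPC, exactly as in the trace:

  # Host-side knob that changes between blocks of this log:
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
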
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.523 00:18:28.782 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.782 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.782 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.782 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.782 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.782 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.782 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.782 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.782 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.782 { 00:18:28.782 "cntlid": 111, 00:18:28.782 "qid": 0, 00:18:28.782 "state": "enabled", 00:18:28.782 "thread": "nvmf_tgt_poll_group_000", 00:18:28.782 "listen_address": { 00:18:28.782 "trtype": "RDMA", 00:18:28.782 "adrfam": "IPv4", 00:18:28.782 "traddr": "192.168.100.8", 00:18:28.782 "trsvcid": "4420" 00:18:28.782 }, 00:18:28.782 "peer_address": { 00:18:28.782 "trtype": "RDMA", 00:18:28.782 "adrfam": "IPv4", 00:18:28.782 "traddr": "192.168.100.8", 00:18:28.782 "trsvcid": "42963" 00:18:28.782 }, 00:18:28.782 "auth": { 00:18:28.782 "state": "completed", 00:18:28.782 "digest": "sha512", 00:18:28.782 "dhgroup": "ffdhe2048" 00:18:28.782 } 00:18:28.782 } 00:18:28.782 ]' 00:18:28.783 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.783 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.783 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.041 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.041 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.041 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.041 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.041 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.041 07:24:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 
8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:18:29.978 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.978 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:29.978 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.978 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.978 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.978 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.978 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.978 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:29.978 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:29.978 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:29.978 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.978 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.979 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:29.979 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:29.979 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.979 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.979 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.979 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.979 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.979 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.979 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.238 00:18:30.238 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.238 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.238 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.497 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.497 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.497 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.497 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.497 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.497 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.497 { 00:18:30.497 "cntlid": 113, 00:18:30.497 "qid": 0, 00:18:30.497 "state": "enabled", 00:18:30.497 "thread": "nvmf_tgt_poll_group_000", 00:18:30.497 "listen_address": { 00:18:30.497 "trtype": "RDMA", 00:18:30.497 "adrfam": "IPv4", 00:18:30.497 "traddr": "192.168.100.8", 00:18:30.497 "trsvcid": "4420" 00:18:30.497 }, 00:18:30.497 "peer_address": { 00:18:30.497 "trtype": "RDMA", 00:18:30.497 "adrfam": "IPv4", 00:18:30.497 "traddr": "192.168.100.8", 00:18:30.497 "trsvcid": "57842" 00:18:30.497 }, 00:18:30.497 "auth": { 00:18:30.497 "state": "completed", 00:18:30.497 "digest": "sha512", 00:18:30.497 "dhgroup": "ffdhe3072" 00:18:30.497 } 00:18:30.497 } 00:18:30.497 ]' 00:18:30.497 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.497 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.497 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.497 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:30.497 07:24:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.497 07:24:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.497 07:24:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.497 07:24:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.756 07:24:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret 
DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:18:31.324 07:24:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.583 07:24:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:31.583 07:24:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.583 07:24:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.583 07:24:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.583 07:24:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.583 07:24:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:31.583 07:24:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:31.584 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:31.584 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.584 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.584 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:31.584 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:31.584 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.584 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.584 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.584 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.584 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.584 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.584 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.843 00:18:31.843 07:24:04 
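The nvme connect lines feed the same key material to the kernel initiator as literal secrets. As I read the DHHC-1 container format (defined by the NVMe-oF spec and used by nvme-cli), the field after "DHHC-1:" says how the base64 key body was transformed: 00 for an unhashed key, 01/02/03 for SHA-256/384/512. Under that reading, the connect just above pairs an untransformed host secret (DHHC-1:00:...) with a SHA-512-transformed controller secret (DHHC-1:03:...):

  # Pulling the hash id out of a DHHC-1 secret taken verbatim from this log
  # (format interpretation per my reading of the spec, not confirmed here):
  secret='DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==:'
  echo "$secret" | cut -d: -f2   # -> 00, i.e. this host key is stored untransformed
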
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.843 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.843 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.102 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.102 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.102 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.102 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.102 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.102 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.102 { 00:18:32.102 "cntlid": 115, 00:18:32.102 "qid": 0, 00:18:32.102 "state": "enabled", 00:18:32.102 "thread": "nvmf_tgt_poll_group_000", 00:18:32.102 "listen_address": { 00:18:32.102 "trtype": "RDMA", 00:18:32.102 "adrfam": "IPv4", 00:18:32.102 "traddr": "192.168.100.8", 00:18:32.102 "trsvcid": "4420" 00:18:32.102 }, 00:18:32.102 "peer_address": { 00:18:32.102 "trtype": "RDMA", 00:18:32.102 "adrfam": "IPv4", 00:18:32.102 "traddr": "192.168.100.8", 00:18:32.102 "trsvcid": "35623" 00:18:32.102 }, 00:18:32.102 "auth": { 00:18:32.102 "state": "completed", 00:18:32.102 "digest": "sha512", 00:18:32.102 "dhgroup": "ffdhe3072" 00:18:32.102 } 00:18:32.102 } 00:18:32.102 ]' 00:18:32.102 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.102 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.102 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.102 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.102 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.361 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.361 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.361 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.361 07:24:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:18:32.928 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.187 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.187 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:33.187 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.187 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.187 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.187 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.187 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:33.187 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.524 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.524 07:24:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.525 07:24:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.784 { 00:18:33.784 "cntlid": 117, 00:18:33.784 "qid": 0, 00:18:33.784 "state": "enabled", 00:18:33.784 "thread": "nvmf_tgt_poll_group_000", 00:18:33.784 "listen_address": { 00:18:33.784 "trtype": "RDMA", 00:18:33.784 "adrfam": "IPv4", 00:18:33.784 "traddr": "192.168.100.8", 00:18:33.784 "trsvcid": "4420" 00:18:33.784 }, 00:18:33.784 "peer_address": { 00:18:33.784 "trtype": "RDMA", 00:18:33.784 "adrfam": "IPv4", 00:18:33.784 "traddr": "192.168.100.8", 00:18:33.784 "trsvcid": "49094" 00:18:33.784 }, 00:18:33.784 "auth": { 00:18:33.784 "state": "completed", 00:18:33.784 "digest": "sha512", 00:18:33.784 "dhgroup": "ffdhe3072" 00:18:33.784 } 00:18:33.784 } 00:18:33.784 ]' 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.784 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.043 07:24:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:18:34.612 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.871 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.130 00:18:35.389 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.389 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.389 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.389 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.389 07:24:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.389 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.389 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.389 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.389 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.389 { 00:18:35.389 "cntlid": 119, 00:18:35.389 "qid": 0, 00:18:35.389 "state": "enabled", 00:18:35.389 "thread": "nvmf_tgt_poll_group_000", 00:18:35.389 "listen_address": { 00:18:35.389 "trtype": "RDMA", 00:18:35.389 "adrfam": "IPv4", 00:18:35.389 "traddr": "192.168.100.8", 00:18:35.389 "trsvcid": "4420" 00:18:35.389 }, 00:18:35.389 "peer_address": { 00:18:35.389 "trtype": "RDMA", 00:18:35.389 "adrfam": "IPv4", 00:18:35.389 "traddr": "192.168.100.8", 00:18:35.389 "trsvcid": "60755" 00:18:35.389 }, 00:18:35.389 "auth": { 00:18:35.389 "state": "completed", 00:18:35.389 "digest": "sha512", 00:18:35.389 "dhgroup": "ffdhe3072" 00:18:35.389 } 00:18:35.389 } 00:18:35.389 ]' 00:18:35.389 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.389 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.389 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.648 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:35.648 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.648 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.648 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.648 07:24:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.648 07:24:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:18:36.217 07:24:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.475 07:24:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:36.475 07:24:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.475 07:24:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.476 07:24:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.476 
07:24:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.476 07:24:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.476 07:24:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:36.476 07:24:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:36.734 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:36.734 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.734 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:36.734 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:36.734 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:36.734 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.734 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.734 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.734 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.734 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.734 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.734 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.993 00:18:36.993 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.993 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.993 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.993 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.993 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.993 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:36.993 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.993 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.993 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.993 { 00:18:36.993 "cntlid": 121, 00:18:36.993 "qid": 0, 00:18:36.993 "state": "enabled", 00:18:36.993 "thread": "nvmf_tgt_poll_group_000", 00:18:36.993 "listen_address": { 00:18:36.993 "trtype": "RDMA", 00:18:36.993 "adrfam": "IPv4", 00:18:36.993 "traddr": "192.168.100.8", 00:18:36.993 "trsvcid": "4420" 00:18:36.993 }, 00:18:36.993 "peer_address": { 00:18:36.993 "trtype": "RDMA", 00:18:36.993 "adrfam": "IPv4", 00:18:36.993 "traddr": "192.168.100.8", 00:18:36.993 "trsvcid": "58383" 00:18:36.993 }, 00:18:36.993 "auth": { 00:18:36.993 "state": "completed", 00:18:36.993 "digest": "sha512", 00:18:36.993 "dhgroup": "ffdhe4096" 00:18:36.993 } 00:18:36.993 } 00:18:36.993 ]' 00:18:36.993 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.252 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.252 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.252 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:37.252 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.252 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.252 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.252 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.510 07:24:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:18:38.077 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.077 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:38.077 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.077 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.077 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.077 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.077 07:24:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:38.077 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:38.335 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:38.335 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.335 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:38.335 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:38.335 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.335 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.335 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.335 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.335 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.335 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.336 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.336 07:24:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.594 00:18:38.594 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.594 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.594 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.853 
07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.853 { 00:18:38.853 "cntlid": 123, 00:18:38.853 "qid": 0, 00:18:38.853 "state": "enabled", 00:18:38.853 "thread": "nvmf_tgt_poll_group_000", 00:18:38.853 "listen_address": { 00:18:38.853 "trtype": "RDMA", 00:18:38.853 "adrfam": "IPv4", 00:18:38.853 "traddr": "192.168.100.8", 00:18:38.853 "trsvcid": "4420" 00:18:38.853 }, 00:18:38.853 "peer_address": { 00:18:38.853 "trtype": "RDMA", 00:18:38.853 "adrfam": "IPv4", 00:18:38.853 "traddr": "192.168.100.8", 00:18:38.853 "trsvcid": "52693" 00:18:38.853 }, 00:18:38.853 "auth": { 00:18:38.853 "state": "completed", 00:18:38.853 "digest": "sha512", 00:18:38.853 "dhgroup": "ffdhe4096" 00:18:38.853 } 00:18:38.853 } 00:18:38.853 ]' 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.853 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.111 07:24:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:18:39.678 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.678 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:39.678 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.678 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.937 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.196 00:18:40.196 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.196 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.196 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.455 { 00:18:40.455 "cntlid": 125, 00:18:40.455 "qid": 0, 00:18:40.455 "state": "enabled", 00:18:40.455 "thread": "nvmf_tgt_poll_group_000", 
00:18:40.455 "listen_address": { 00:18:40.455 "trtype": "RDMA", 00:18:40.455 "adrfam": "IPv4", 00:18:40.455 "traddr": "192.168.100.8", 00:18:40.455 "trsvcid": "4420" 00:18:40.455 }, 00:18:40.455 "peer_address": { 00:18:40.455 "trtype": "RDMA", 00:18:40.455 "adrfam": "IPv4", 00:18:40.455 "traddr": "192.168.100.8", 00:18:40.455 "trsvcid": "52626" 00:18:40.455 }, 00:18:40.455 "auth": { 00:18:40.455 "state": "completed", 00:18:40.455 "digest": "sha512", 00:18:40.455 "dhgroup": "ffdhe4096" 00:18:40.455 } 00:18:40.455 } 00:18:40.455 ]' 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.455 07:24:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.714 07:24:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:18:41.280 07:24:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.539 07:24:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:41.539 07:24:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.539 07:24:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.539 07:24:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.539 07:24:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.539 07:24:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:41.539 07:24:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:41.797 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 
00:18:41.798 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.798 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.798 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:41.798 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:41.798 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.798 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:41.798 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.798 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.798 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.798 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.798 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.056 00:18:42.056 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.056 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.056 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.056 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.056 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.056 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.057 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.057 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.057 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.057 { 00:18:42.057 "cntlid": 127, 00:18:42.057 "qid": 0, 00:18:42.057 "state": "enabled", 00:18:42.057 "thread": "nvmf_tgt_poll_group_000", 00:18:42.057 "listen_address": { 00:18:42.057 "trtype": "RDMA", 00:18:42.057 "adrfam": "IPv4", 00:18:42.057 "traddr": "192.168.100.8", 00:18:42.057 "trsvcid": "4420" 00:18:42.057 }, 00:18:42.057 "peer_address": { 00:18:42.057 "trtype": "RDMA", 00:18:42.057 "adrfam": "IPv4", 00:18:42.057 "traddr": "192.168.100.8", 00:18:42.057 "trsvcid": "32879" 00:18:42.057 }, 00:18:42.057 
"auth": { 00:18:42.057 "state": "completed", 00:18:42.057 "digest": "sha512", 00:18:42.057 "dhgroup": "ffdhe4096" 00:18:42.057 } 00:18:42.057 } 00:18:42.057 ]' 00:18:42.057 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.315 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.315 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.316 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:42.316 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.316 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.316 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.316 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.574 07:24:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:18:43.142 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.142 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:43.142 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.142 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.142 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.142 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.142 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.142 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:43.142 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:43.400 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:43.400 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.400 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:43.400 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.400 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.400 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.401 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.401 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.401 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.401 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.401 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.401 07:24:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.659 00:18:43.659 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.659 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.659 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.918 { 00:18:43.918 "cntlid": 129, 00:18:43.918 "qid": 0, 00:18:43.918 "state": "enabled", 00:18:43.918 "thread": "nvmf_tgt_poll_group_000", 00:18:43.918 "listen_address": { 00:18:43.918 "trtype": "RDMA", 00:18:43.918 "adrfam": "IPv4", 00:18:43.918 "traddr": "192.168.100.8", 00:18:43.918 "trsvcid": "4420" 00:18:43.918 }, 00:18:43.918 "peer_address": { 00:18:43.918 "trtype": "RDMA", 00:18:43.918 "adrfam": "IPv4", 00:18:43.918 "traddr": "192.168.100.8", 00:18:43.918 "trsvcid": "54885" 00:18:43.918 }, 00:18:43.918 "auth": { 00:18:43.918 "state": "completed", 00:18:43.918 "digest": "sha512", 00:18:43.918 "dhgroup": "ffdhe6144" 00:18:43.918 } 00:18:43.918 } 00:18:43.918 ]' 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.918 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.177 07:24:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:18:44.744 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.004 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.572 00:18:45.572 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.572 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.572 07:24:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.572 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.572 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.572 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.572 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.572 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.572 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.572 { 00:18:45.572 "cntlid": 131, 00:18:45.572 "qid": 0, 00:18:45.572 "state": "enabled", 00:18:45.572 "thread": "nvmf_tgt_poll_group_000", 00:18:45.572 "listen_address": { 00:18:45.572 "trtype": "RDMA", 00:18:45.572 "adrfam": "IPv4", 00:18:45.572 "traddr": "192.168.100.8", 00:18:45.572 "trsvcid": "4420" 00:18:45.572 }, 00:18:45.572 "peer_address": { 00:18:45.572 "trtype": "RDMA", 00:18:45.572 "adrfam": "IPv4", 00:18:45.572 "traddr": "192.168.100.8", 00:18:45.572 "trsvcid": "42633" 00:18:45.572 }, 00:18:45.572 "auth": { 00:18:45.572 "state": "completed", 00:18:45.572 "digest": "sha512", 00:18:45.572 "dhgroup": "ffdhe6144" 00:18:45.572 } 00:18:45.572 } 00:18:45.572 ]' 00:18:45.572 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.572 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.572 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.572 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:45.831 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.831 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.831 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.831 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.831 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:18:46.443 07:24:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.716 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.285 00:18:47.285 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.285 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.285 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.285 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.285 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.285 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.285 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.285 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.285 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.285 { 00:18:47.285 "cntlid": 133, 00:18:47.285 "qid": 0, 00:18:47.285 "state": "enabled", 00:18:47.285 "thread": "nvmf_tgt_poll_group_000", 00:18:47.285 "listen_address": { 00:18:47.285 "trtype": "RDMA", 00:18:47.285 "adrfam": "IPv4", 00:18:47.285 "traddr": "192.168.100.8", 00:18:47.285 "trsvcid": "4420" 00:18:47.285 }, 00:18:47.285 "peer_address": { 00:18:47.285 "trtype": "RDMA", 00:18:47.285 "adrfam": "IPv4", 00:18:47.285 "traddr": "192.168.100.8", 00:18:47.285 "trsvcid": "53192" 00:18:47.285 }, 00:18:47.285 "auth": { 00:18:47.285 "state": "completed", 00:18:47.285 "digest": "sha512", 00:18:47.285 "dhgroup": "ffdhe6144" 00:18:47.285 } 00:18:47.285 } 00:18:47.285 ]' 00:18:47.285 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.544 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.544 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.544 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.544 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
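
The entries above are one full iteration of the test's connect_authenticate loop: for each digest/dhgroup/key combination the harness narrows the host-side initiator to a single DH-HMAC-CHAP digest and DH group, grants the host NQN the key pair on the target, attaches a controller over RDMA, and then uses nvmf_subsystem_get_qpairs plus jq to confirm the qpair's auth block reports exactly the digest, dhgroup, and completed state it asked for (note the cntlid climbing by two per round as each attach creates a fresh controller). A condensed sketch of one round, using only commands, sockets, and NQNs that appear in this run; key2/ckey2 are key names registered earlier in the test:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    subnqn=nqn.2024-03.io.spdk:cnode0

    # host side: restrict the initiator to one digest/dhgroup pair
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # target side (default /var/tmp/spdk.sock): allow this host NQN with the key pair
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # attach, then verify what was actually negotiated on the resulting qpair
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
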
00:18:47.544 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.544 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.544 07:24:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.804 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:18:48.372 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.372 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:48.372 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.372 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.372 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.372 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.372 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:48.372 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:48.632 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:48.632 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.632 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.632 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.632 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.632 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.632 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:48.632 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.632 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.632 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.632 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.632 07:24:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.890 00:18:48.890 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.890 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.890 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.150 { 00:18:49.150 "cntlid": 135, 00:18:49.150 "qid": 0, 00:18:49.150 "state": "enabled", 00:18:49.150 "thread": "nvmf_tgt_poll_group_000", 00:18:49.150 "listen_address": { 00:18:49.150 "trtype": "RDMA", 00:18:49.150 "adrfam": "IPv4", 00:18:49.150 "traddr": "192.168.100.8", 00:18:49.150 "trsvcid": "4420" 00:18:49.150 }, 00:18:49.150 "peer_address": { 00:18:49.150 "trtype": "RDMA", 00:18:49.150 "adrfam": "IPv4", 00:18:49.150 "traddr": "192.168.100.8", 00:18:49.150 "trsvcid": "52932" 00:18:49.150 }, 00:18:49.150 "auth": { 00:18:49.150 "state": "completed", 00:18:49.150 "digest": "sha512", 00:18:49.150 "dhgroup": "ffdhe6144" 00:18:49.150 } 00:18:49.150 } 00:18:49.150 ]' 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.150 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.409 07:24:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:18:49.977 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.237 07:24:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.805 00:18:50.805 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.805 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.805 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.064 { 00:18:51.064 "cntlid": 137, 00:18:51.064 "qid": 0, 00:18:51.064 "state": "enabled", 00:18:51.064 "thread": "nvmf_tgt_poll_group_000", 00:18:51.064 "listen_address": { 00:18:51.064 "trtype": "RDMA", 00:18:51.064 "adrfam": "IPv4", 00:18:51.064 "traddr": "192.168.100.8", 00:18:51.064 "trsvcid": "4420" 00:18:51.064 }, 00:18:51.064 "peer_address": { 00:18:51.064 "trtype": "RDMA", 00:18:51.064 "adrfam": "IPv4", 00:18:51.064 "traddr": "192.168.100.8", 00:18:51.064 "trsvcid": "50328" 00:18:51.064 }, 00:18:51.064 "auth": { 00:18:51.064 "state": "completed", 00:18:51.064 "digest": "sha512", 00:18:51.064 "dhgroup": "ffdhe8192" 00:18:51.064 } 00:18:51.064 } 00:18:51.064 ]' 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.064 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.323 07:24:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:18:51.891 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.891 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:51.891 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.891 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.150 07:24:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.719 00:18:52.719 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.719 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.719 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.978 { 00:18:52.978 "cntlid": 139, 00:18:52.978 "qid": 0, 00:18:52.978 "state": "enabled", 00:18:52.978 "thread": "nvmf_tgt_poll_group_000", 00:18:52.978 "listen_address": { 00:18:52.978 "trtype": "RDMA", 00:18:52.978 "adrfam": "IPv4", 00:18:52.978 "traddr": "192.168.100.8", 00:18:52.978 "trsvcid": "4420" 00:18:52.978 }, 00:18:52.978 "peer_address": { 00:18:52.978 "trtype": "RDMA", 00:18:52.978 "adrfam": "IPv4", 00:18:52.978 "traddr": "192.168.100.8", 00:18:52.978 "trsvcid": "40054" 00:18:52.978 }, 00:18:52.978 "auth": { 00:18:52.978 "state": "completed", 00:18:52.978 "digest": "sha512", 00:18:52.978 "dhgroup": "ffdhe8192" 00:18:52.978 } 00:18:52.978 } 00:18:52.978 ]' 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.978 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.237 07:24:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:NjI0ZTlmYTdhMDZiNDRhYzZhMjBiNGVjNjMzMzAwMDnBd+9A: --dhchap-ctrl-secret DHHC-1:02:NjVhYjU1YmRhMGE2MDgwNGQ0NTk1MjU4ODRlMDgzZDM2NDM5N2NhN2VkMjFmNTZkEBoPzQ==: 00:18:53.805 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.806 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:53.806 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.806 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.806 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.806 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.806 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:53.806 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:54.064 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:54.064 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.064 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.064 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:54.064 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:54.064 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.064 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.064 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.064 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.064 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.064 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.064 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:18:54.632 00:18:54.632 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.632 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.632 07:24:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.632 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.632 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.632 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.632 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.632 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.632 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.632 { 00:18:54.632 "cntlid": 141, 00:18:54.632 "qid": 0, 00:18:54.632 "state": "enabled", 00:18:54.632 "thread": "nvmf_tgt_poll_group_000", 00:18:54.632 "listen_address": { 00:18:54.632 "trtype": "RDMA", 00:18:54.632 "adrfam": "IPv4", 00:18:54.632 "traddr": "192.168.100.8", 00:18:54.632 "trsvcid": "4420" 00:18:54.632 }, 00:18:54.632 "peer_address": { 00:18:54.632 "trtype": "RDMA", 00:18:54.632 "adrfam": "IPv4", 00:18:54.632 "traddr": "192.168.100.8", 00:18:54.632 "trsvcid": "51951" 00:18:54.632 }, 00:18:54.632 "auth": { 00:18:54.632 "state": "completed", 00:18:54.632 "digest": "sha512", 00:18:54.632 "dhgroup": "ffdhe8192" 00:18:54.632 } 00:18:54.632 } 00:18:54.632 ]' 00:18:54.632 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.891 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.891 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.891 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.891 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.891 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.892 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.892 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.150 07:24:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78: 00:18:55.716 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
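
After each RPC-driven attach/detach pass, the same keys are exercised through the kernel initiator with nvme-cli, and the trailing disconnect confirms exactly one controller went away. The secrets passed on the command line use the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hh>:<base64>:, where <hh> names the hash used to transform the stored secret (00 unhashed, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the key material plus a CRC. A sketch of the host-side leg, with the secrets copied verbatim from the key2 round above:

    # kernel-initiator connect with bidirectional DH-HMAC-CHAP secrets
    nvme connect -t rdma -a 192.168.100.8 \
        -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid 8013ee90-59d8-e711-906e-00163566263e \
        --dhchap-secret DHHC-1:02:MmNkNjkxYTY0ZjAxYzdjNDU1ZjEzNTE4OTVmYzM0YWE0ZTJhNDQ4ZDkyZDZhN2NjacO4pw==: \
        --dhchap-ctrl-secret DHHC-1:01:MjUyOTgzZjRmYjY1MDdkOTg5MmIwNWIxZTRmMjRjMjGuxt78:
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # prints "... disconnected 1 controller(s)"
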
00:18:55.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.716 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:55.716 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.716 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.716 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.716 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.716 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:55.716 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:55.975 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:55.975 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.975 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:55.975 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:55.975 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.975 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.975 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:55.975 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.975 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.975 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.975 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.975 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.542 00:18:56.542 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.542 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.542 07:24:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.542 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.542 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.542 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.542 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.542 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.542 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.542 { 00:18:56.542 "cntlid": 143, 00:18:56.542 "qid": 0, 00:18:56.542 "state": "enabled", 00:18:56.542 "thread": "nvmf_tgt_poll_group_000", 00:18:56.542 "listen_address": { 00:18:56.542 "trtype": "RDMA", 00:18:56.542 "adrfam": "IPv4", 00:18:56.542 "traddr": "192.168.100.8", 00:18:56.542 "trsvcid": "4420" 00:18:56.542 }, 00:18:56.542 "peer_address": { 00:18:56.542 "trtype": "RDMA", 00:18:56.542 "adrfam": "IPv4", 00:18:56.542 "traddr": "192.168.100.8", 00:18:56.542 "trsvcid": "48273" 00:18:56.542 }, 00:18:56.542 "auth": { 00:18:56.542 "state": "completed", 00:18:56.542 "digest": "sha512", 00:18:56.542 "dhgroup": "ffdhe8192" 00:18:56.542 } 00:18:56.542 } 00:18:56.542 ]' 00:18:56.542 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.800 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.800 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.800 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:56.800 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.800 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.801 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.801 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.060 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=: 00:18:57.629 07:24:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.629 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:57.629 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:57.629 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.629 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.629 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:57.629 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:57.629 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:57.629 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:57.629 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:57.629 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:57.888 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:57.888 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.888 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:57.888 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:57.888 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:57.888 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.888 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.888 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.888 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.888 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.888 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.888 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.456 00:18:58.456 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.456 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.456 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.456 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.456 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.456 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.456 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.456 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.456 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.456 { 00:18:58.456 "cntlid": 145, 00:18:58.456 "qid": 0, 00:18:58.456 "state": "enabled", 00:18:58.456 "thread": "nvmf_tgt_poll_group_000", 00:18:58.456 "listen_address": { 00:18:58.456 "trtype": "RDMA", 00:18:58.456 "adrfam": "IPv4", 00:18:58.456 "traddr": "192.168.100.8", 00:18:58.456 "trsvcid": "4420" 00:18:58.456 }, 00:18:58.456 "peer_address": { 00:18:58.456 "trtype": "RDMA", 00:18:58.456 "adrfam": "IPv4", 00:18:58.456 "traddr": "192.168.100.8", 00:18:58.456 "trsvcid": "32802" 00:18:58.456 }, 00:18:58.456 "auth": { 00:18:58.456 "state": "completed", 00:18:58.456 "digest": "sha512", 00:18:58.456 "dhgroup": "ffdhe8192" 00:18:58.456 } 00:18:58.456 } 00:18:58.456 ]' 00:18:58.456 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.715 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.715 07:24:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.715 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:58.715 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.715 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.715 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.715 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.974 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MmRkMzg3OWZkZTczOTRhOWE2NzQxM2ZlZGQyNzJhZjFiZmNlOTQwZTgwZWM0MDg3iHIFlw==: --dhchap-ctrl-secret DHHC-1:03:MjEyNDVkOGVhNmU1ZWU4YTBkOGY3MDA5MzMzZmZjNjc5ZTdlMzY5NmFmYmVhOTBkZTQ4OTk3MDllOWM1MTdkZXiVfjg=: 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:59.542 07:24:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:31.655 request: 00:19:31.655 { 00:19:31.655 "name": "nvme0", 00:19:31.655 "trtype": "rdma", 00:19:31.655 "traddr": "192.168.100.8", 00:19:31.655 "adrfam": "ipv4", 00:19:31.655 "trsvcid": "4420", 00:19:31.655 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:31.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:31.655 "prchk_reftag": false, 00:19:31.655 "prchk_guard": false, 00:19:31.655 "hdgst": false, 00:19:31.655 "ddgst": false, 00:19:31.655 "dhchap_key": "key2", 00:19:31.655 "method": 
"bdev_nvme_attach_controller", 00:19:31.655 "req_id": 1 00:19:31.655 } 00:19:31.655 Got JSON-RPC error response 00:19:31.655 response: 00:19:31.655 { 00:19:31.655 "code": -5, 00:19:31.655 "message": "Input/output error" 00:19:31.655 } 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:31.655 07:25:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:19:31.655 request:
00:19:31.655 {
00:19:31.655 "name": "nvme0",
00:19:31.655 "trtype": "rdma",
00:19:31.655 "traddr": "192.168.100.8",
00:19:31.655 "adrfam": "ipv4",
00:19:31.655 "trsvcid": "4420",
00:19:31.655 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:31.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:19:31.655 "prchk_reftag": false,
00:19:31.655 "prchk_guard": false,
00:19:31.655 "hdgst": false,
00:19:31.655 "ddgst": false,
00:19:31.655 "dhchap_key": "key1",
00:19:31.655 "dhchap_ctrlr_key": "ckey2",
00:19:31.655 "method": "bdev_nvme_attach_controller",
00:19:31.655 "req_id": 1
00:19:31.655 }
00:19:31.655 Got JSON-RPC error response
00:19:31.655 response:
00:19:31.655 {
00:19:31.655 "code": -5,
00:19:31.655 "message": "Input/output error"
00:19:31.655 }
00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1
00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:31.655 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.656 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:31.656 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:31.656 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:19:31.656 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:31.656 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:19:31.656 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:31.656 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:19:31.656 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:31.656 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:31.656 07:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:03.723 request:
00:20:03.723 {
00:20:03.723 "name": "nvme0",
00:20:03.723 "trtype": "rdma",
00:20:03.723 "traddr": "192.168.100.8",
00:20:03.723 "adrfam": "ipv4",
00:20:03.723 "trsvcid": "4420",
00:20:03.723 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:03.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:03.723 "prchk_reftag": false,
00:20:03.723 "prchk_guard": false,
00:20:03.723 "hdgst": false,
00:20:03.723 "ddgst": false,
00:20:03.723 "dhchap_key": "key1",
00:20:03.723 "dhchap_ctrlr_key": "ckey1",
00:20:03.723 "method": "bdev_nvme_attach_controller",
00:20:03.723 "req_id": 1
00:20:03.723 }
00:20:03.723 Got JSON-RPC error response
00:20:03.723 response:
00:20:03.723 {
00:20:03.723 "code": -5,
00:20:03.723 "message": "Input/output error"
00:20:03.723 }
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2684487
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2684487 ']'
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2684487
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2684487
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2684487'
00:20:03.723 killing process with pid 2684487
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2684487
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2684487
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2718219
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2718219
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2718219 ']'
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
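The trace above tears down the previous nvmf target (pid 2684487) and restarts it with --wait-for-rpc -L nvmf_auth so DHCHAP state can be configured before any subsystem goes live. For reference, a minimal standalone sketch of that restart handshake, using the binary and socket paths from this run; the polling loop mirrors what waitforlisten does, and the framework_start_init call is an assumption about how --wait-for-rpc is normally released, since this trace does not show that step:

# Launch the target paused until RPC initialization is requested (flags as in this run)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Poll the RPC socket until the app answers, as waitforlisten does (retry cap of 100 as above)
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# Assumption: a real run would load auth keys here, then release the app
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init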
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:03.723 07:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.723 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:03.723 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0
00:20:03.723 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:20:03.723 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable
00:20:03.723 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2718219
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2718219 ']'
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:03.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
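With the target live again, the next block (connect_authenticate sha512 ffdhe8192 3) provisions the host NQN with key3 on the target side, attaches a controller from the host-side bdev layer, and checks the negotiated auth parameters on the resulting qpair. A condensed sketch of that sequence under the same assumptions, with rpc.py abbreviating the full script path used in this run:

# Target side: allow this host NQN to authenticate with key3
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
# Host side: attach over RDMA through the second SPDK instance listening on host.sock
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
# Inspect what was negotiated on the admin qpair (expected: sha512 / ffdhe8192 / completed)
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq '.[0].auth'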
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:03.724 07:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:03.724
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:03.724 {
00:20:03.724 "cntlid": 1,
00:20:03.724 "qid": 0,
00:20:03.724 "state": "enabled",
00:20:03.724 "thread": "nvmf_tgt_poll_group_000",
00:20:03.724 "listen_address": {
00:20:03.724 "trtype": "RDMA",
00:20:03.724 "adrfam": "IPv4",
00:20:03.724 "traddr": "192.168.100.8",
00:20:03.724 "trsvcid": "4420"
00:20:03.724 },
00:20:03.724 "peer_address": {
00:20:03.724 "trtype": "RDMA",
00:20:03.724 "adrfam": "IPv4",
00:20:03.724 "traddr": "192.168.100.8",
00:20:03.724 "trsvcid": "52640"
00:20:03.724 },
00:20:03.724 "auth": {
00:20:03.724 "state": "completed",
00:20:03.724 "digest": "sha512",
00:20:03.724 "dhgroup": "ffdhe8192"
00:20:03.724 }
00:20:03.724 }
00:20:03.724 ]'
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:03.724 07:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:03.724 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjE3NDUyMmEwY2Q0MjI5NzRkZjNmNmI4Y2I0NjdlYjYzZDg0NTY3YjI0M2VjZDg0YTIwNjVkYjE5NzcyYjU1Nq+oWKg=:
00:20:04.291 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:04.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:04.291 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:04.291 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:04.291 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.291 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:04.291 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:20:04.291 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:04.291 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.291 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:04.291 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:20:04.291 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:20:04.550 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:04.550 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:20:04.550 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:04.550 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:20:04.550 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:04.550 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:20:04.550 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:04.550 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:04.550 07:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:36.688 request:
00:20:36.688 {
00:20:36.688 "name": "nvme0",
00:20:36.688 "trtype": "rdma",
00:20:36.688 "traddr": "192.168.100.8",
00:20:36.688 "adrfam": "ipv4",
00:20:36.688 "trsvcid": "4420",
00:20:36.688 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:36.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:36.688 "prchk_reftag": false,
00:20:36.688 "prchk_guard": false,
00:20:36.688 "hdgst": false,
00:20:36.688 "ddgst": false,
00:20:36.688 "dhchap_key": "key3",
00:20:36.688 "method": "bdev_nvme_attach_controller",
00:20:36.688 "req_id": 1
00:20:36.688 }
00:20:36.688 Got JSON-RPC error response
00:20:36.688 response:
00:20:36.688 {
00:20:36.688 "code": -5,
00:20:36.688 "message": "Input/output error"
00:20:36.688 }
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=,
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:36.688 07:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:08.762 request:
00:21:08.762 {
00:21:08.762 "name": "nvme0",
00:21:08.762 "trtype": "rdma",
00:21:08.762 "traddr": "192.168.100.8",
00:21:08.762 "adrfam": "ipv4",
00:21:08.762 "trsvcid": "4420",
00:21:08.762 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:08.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:08.762 "prchk_reftag": false,
00:21:08.762 "prchk_guard": false,
00:21:08.762 "hdgst": false,
00:21:08.762 "ddgst": false,
00:21:08.762 "dhchap_key": "key3",
00:21:08.762 "method": "bdev_nvme_attach_controller",
00:21:08.762 "req_id": 1
00:21:08.762 }
00:21:08.762 Got JSON-RPC error response
00:21:08.762 response:
00:21:08.762 {
00:21:08.762 "code": -5,
00:21:08.762 "message": "Input/output error"
00:21:08.762 }
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=,
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=,
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:08.762 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:08.763 07:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:08.763 request:
00:21:08.763 {
00:21:08.763 "name": "nvme0",
00:21:08.763 "trtype": "rdma",
00:21:08.763 "traddr": "192.168.100.8",
00:21:08.763 "adrfam": "ipv4",
00:21:08.763 "trsvcid": "4420",
00:21:08.763 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:08.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:08.763 "prchk_reftag": false,
00:21:08.763 "prchk_guard": false,
00:21:08.763 "hdgst": false,
00:21:08.763 "ddgst": false,
00:21:08.763 "dhchap_key": "key0",
00:21:08.763 "dhchap_ctrlr_key": "key1",
00:21:08.763 "method": "bdev_nvme_attach_controller",
00:21:08.763 "req_id": 1
00:21:08.763 }
00:21:08.763 Got JSON-RPC error response
00:21:08.763 response:
00:21:08.763 {
00:21:08.763 "code": -5,
00:21:08.763 "message": "Input/output error"
00:21:08.763 }
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:21:08.763
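The error responses above are the intended outcome: once bdev_nvme_set_options narrows the host's allowed digests or dhgroups away from what the target-side key requires, or the key pairing itself is wrong (key0 against key1), every attach fails with JSON-RPC error -5 (Input/output error). A condensed sketch of one such negative check under the same assumptions as the earlier sketches (rpc.py abbreviates the full script path; the || branch is illustrative, the test harness uses its NOT helper instead):

# Narrow the host to sha256 only; the target side was provisioned for sha512
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
# This attach is expected to fail with code -5
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 || echo 'attach failed as expected'
# Restore the full digest and dhgroup sets before the next case
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192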
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name'
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2684646
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2684646 ']'
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2684646
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2684646
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2684646'
00:21:08.763 killing process with pid 2684646
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2684646
00:21:08.763 07:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2684646
00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync
00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e
00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:21:08.763 rmmod nvme_rdma
00:21:08.763 rmmod nvme_fabrics
00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target --
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2718219 ']' 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2718219 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2718219 ']' 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2718219 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2718219 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2718219' 00:21:08.763 killing process with pid 2718219 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2718219 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2718219 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.J7a /tmp/spdk.key-sha256.6uS /tmp/spdk.key-sha384.Z9g /tmp/spdk.key-sha512.mby /tmp/spdk.key-sha512.FoM /tmp/spdk.key-sha384.0t3 /tmp/spdk.key-sha256.3Tf '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:21:08.763 00:21:08.763 real 4m24.307s 00:21:08.763 user 9m23.710s 00:21:08.763 sys 0m24.880s 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.763 ************************************ 00:21:08.763 END TEST nvmf_auth_target 00:21:08.763 ************************************ 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' rdma = tcp ']' 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # [[ rdma == \r\d\m\a ]] 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@61 -- # run_test nvmf_srq_overwhelm 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:08.763 ************************************ 00:21:08.763 START TEST nvmf_srq_overwhelm 00:21:08.763 ************************************ 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:21:08.763 * Looking for test storage... 00:21:08.763 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.763 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm 
-- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.764 07:26:39 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:21:08.764 07:26:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:15.329 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ 
mlx5_core == unbound ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:15.329 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:15.329 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:15.329 
Found net devices under 0000:d9:00.1: mlx_0_1 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:15.329 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@105 -- # continue 2 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:15.330 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:15.330 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:15.330 altname enp217s0f0np0 00:21:15.330 altname ens818f0np0 00:21:15.330 inet 192.168.100.8/24 scope global mlx_0_0 00:21:15.330 valid_lft forever preferred_lft forever 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:15.330 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:15.330 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:15.330 altname enp217s0f1np1 00:21:15.330 altname ens818f1np1 00:21:15.330 
inet 192.168.100.9/24 scope global mlx_0_1 00:21:15.330 valid_lft forever preferred_lft forever 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:15.330 07:26:47 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:15.330 192.168.100.9' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:15.330 192.168.100.9' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:15.330 192.168.100.9' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.330 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:15.331 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:15.331 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=2733317 00:21:15.331 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:15.331 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 2733317 00:21:15.331 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # '[' -z 2733317 ']' 00:21:15.331 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.331 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:15.331 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.331 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:15.331 07:26:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:15.589 [2024-07-25 07:26:47.891455] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:21:15.589 [2024-07-25 07:26:47.891503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.589 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.589 [2024-07-25 07:26:47.973522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.590 [2024-07-25 07:26:48.048744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.590 [2024-07-25 07:26:48.048782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.590 [2024-07-25 07:26:48.048803] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.590 [2024-07-25 07:26:48.048811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.590 [2024-07-25 07:26:48.048818] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
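The address discovery traced above reduces to two steps: nvmf/common.sh@112-113 parse "ip -o -4 addr show" for each RDMA interface, and @456-458 split the collected list into the first and second target IPs. A minimal sketch of that logic, reconstructed from the xtrace (the real function bodies in nvmf/common.sh may differ in detail):

    # Sketch reconstructed from the xtrace above; nvmf/common.sh may differ.
    get_ip_address() {
        local interface=$1
        # "ip -o" prints one line per address; field 4 is the CIDR, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for nic_name in $(get_rdma_if_list); do get_ip_address "$nic_name"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 here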
00:21:15.590 [2024-07-25 07:26:48.048867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.590 [2024-07-25 07:26:48.048964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.590 [2024-07-25 07:26:48.049037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.590 [2024-07-25 07:26:48.049039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # return 0 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:16.525 [2024-07-25 07:26:48.789664] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7dadd0/0x7df2c0) succeed. 00:21:16.525 [2024-07-25 07:26:48.799023] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7dc410/0x820950) succeed. 
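With the RDMA transport created on both mlx5 devices, the trace below provisions six subsystems, cnode0 through cnode5, each backed by a 64 MB malloc bdev and exposed on the 192.168.100.8:4420 listener, then connects the kernel initiator and waits for the namespace to surface. A condensed sketch of one pass of that loop, reconstructed from the srq_overwhelm.sh@22-28 and autotest_common.sh@1235-1246 xtrace that follows (rpc_cmd is the harness RPC helper seen in the trace; the waitforblk retry cap and poll interval are assumptions, since only the lsblk probes and the final return 0 appear in the log):

    # Sketch of the per-subsystem setup traced below; retry details are assumed.
    waitforblk() {
        local i=0
        until lsblk -l -o NAME | grep -q -w "$1"; do
            ((++i > 100)) && return 1   # assumed retry cap
            sleep 0.1                   # assumed poll interval
        done
        return 0
    }

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    for i in $(seq 0 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i    # 64 MB bdev, 512-byte blocks
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
        nvme connect -i 15 --hostnqn=$hostnqn --hostid=8013ee90-59d8-e711-906e-00163566263e \
            -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
        waitforblk nvme${i}n1
    done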
00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.525 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:16.526 Malloc0 00:21:16.526 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.526 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:21:16.526 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.526 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:16.526 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.526 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:16.526 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.526 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:16.526 [2024-07-25 07:26:48.897727] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:16.526 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.526 07:26:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- 
# lsblk -l -o NAME 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:17.462 Malloc1 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.462 07:26:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:18.396 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:21:18.396 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:18.396 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:18.396 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:18.654 Malloc2 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.654 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:18.655 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.655 07:26:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:19.591 07:26:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:21:19.591 07:26:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:19.591 07:26:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:19.591 07:26:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:21:19.591 07:26:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:19.591 07:26:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:21:19.591 07:26:51 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:19.591 07:26:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:19.591 07:26:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:21:19.591 07:26:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.591 07:26:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:19.591 Malloc3 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.591 07:26:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:20.610 
07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:20.610 Malloc4 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.610 07:26:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:21:21.545 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:21:21.545 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:21.545 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:21.545 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:21:21.545 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:21.545 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.803 Malloc5 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:21:21.803 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.804 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.804 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.804 07:26:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:21:22.739 07:26:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:21:22.739 07:26:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:22.739 07:26:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:21:22.739 07:26:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:22.739 07:26:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:22.739 07:26:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:21:22.739 07:26:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:22.739 07:26:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:21:22.739 
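The fio-wrapper flags just traced map onto the job file printed next: -i 1048576 becomes bs=1048576, -d 128 becomes iodepth=128, -t read becomes rw=read, -r 10 becomes runtime=10 with time_based, and -n 13 becomes numjobs=13, with one [jobN] section per connected namespace. A roughly equivalent standalone invocation for a single device, assuming stock fio options rather than the wrapper's actual implementation:

    # Hedged equivalent of one wrapper-generated job; not the wrapper's own code
    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread=1 --invalidate=1 --norandommap=1 \
        --rw=read --bs=1048576 --iodepth=128 --numjobs=13 \
        --time_based=1 --runtime=10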
[global] 00:21:22.739 thread=1 00:21:22.739 invalidate=1 00:21:22.739 rw=read 00:21:22.739 time_based=1 00:21:22.739 runtime=10 00:21:22.739 ioengine=libaio 00:21:22.739 direct=1 00:21:22.739 bs=1048576 00:21:22.739 iodepth=128 00:21:22.739 norandommap=1 00:21:22.739 numjobs=13 00:21:22.739 00:21:22.739 [job0] 00:21:22.739 filename=/dev/nvme0n1 00:21:22.739 [job1] 00:21:22.739 filename=/dev/nvme1n1 00:21:22.739 [job2] 00:21:22.739 filename=/dev/nvme2n1 00:21:22.739 [job3] 00:21:22.739 filename=/dev/nvme3n1 00:21:22.739 [job4] 00:21:22.739 filename=/dev/nvme4n1 00:21:22.739 [job5] 00:21:22.739 filename=/dev/nvme5n1 00:21:23.015 Could not set queue depth (nvme0n1) 00:21:23.015 Could not set queue depth (nvme1n1) 00:21:23.015 Could not set queue depth (nvme2n1) 00:21:23.015 Could not set queue depth (nvme3n1) 00:21:23.015 Could not set queue depth (nvme4n1) 00:21:23.015 Could not set queue depth (nvme5n1) 00:21:23.275 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:23.275 ... 00:21:23.275 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:23.275 ... 00:21:23.275 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:23.275 ... 00:21:23.275 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:23.275 ... 00:21:23.275 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:23.275 ... 00:21:23.275 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:23.275 ... 
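Six [jobN] sections with numjobs=13 in [global] means fio spawns 6 x 13 = 78 worker threads, matching the "Starting 78 threads" banner below. Because group_reporting is not set, each clone reports its own statistics block, which is why the same jobN label recurs with distinct pid= values in the output that follows. The "Could not set queue depth" warnings indicate fio could not adjust the block-layer queue depth for the connected devices via sysfs and are, in this context, most likely harmless.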
00:21:23.275 fio-3.35 00:21:23.275 Starting 78 threads 00:21:35.483 00:21:35.483 job0: (groupid=0, jobs=1): err= 0: pid=2734918: Thu Jul 25 07:27:06 2024 00:21:35.483 read: IOPS=4, BW=4501KiB/s (4609kB/s)(45.0MiB/10238msec) 00:21:35.483 slat (usec): min=929, max=2098.7k, avg=226510.58, stdev=628148.52 00:21:35.483 clat (msec): min=44, max=10221, avg=7199.53, stdev=3098.91 00:21:35.483 lat (msec): min=2094, max=10237, avg=7426.04, stdev=2932.08 00:21:35.483 clat percentiles (msec): 00:21:35.483 | 1.00th=[ 45], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 4245], 00:21:35.483 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[ 8557], 00:21:35.483 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10268], 00:21:35.483 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:35.483 | 99.99th=[10268] 00:21:35.483 lat (msec) : 50=2.22%, >=2000=97.78% 00:21:35.483 cpu : usr=0.01%, sys=0.46%, ctx=73, majf=0, minf=11521 00:21:35.483 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:21:35.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.483 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:35.483 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.483 job0: (groupid=0, jobs=1): err= 0: pid=2734919: Thu Jul 25 07:27:06 2024 00:21:35.483 read: IOPS=5, BW=5564KiB/s (5698kB/s)(56.0MiB/10306msec) 00:21:35.483 slat (usec): min=596, max=4257.6k, avg=183266.47, stdev=703619.49 00:21:35.483 clat (msec): min=41, max=10302, avg=9283.40, stdev=2528.81 00:21:35.483 lat (msec): min=2101, max=10304, avg=9466.66, stdev=2197.01 00:21:35.483 clat percentiles (msec): 00:21:35.483 | 1.00th=[ 42], 5.00th=[ 2106], 10.00th=[ 6409], 20.00th=[10134], 00:21:35.483 | 30.00th=[10268], 40.00th=[10268], 50.00th=[10268], 60.00th=[10268], 00:21:35.483 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:21:35.483 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:35.483 | 99.99th=[10268] 00:21:35.483 lat (msec) : 50=1.79%, >=2000=98.21% 00:21:35.483 cpu : usr=0.00%, sys=0.50%, ctx=90, majf=0, minf=14337 00:21:35.483 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:21:35.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.483 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:35.483 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.483 job0: (groupid=0, jobs=1): err= 0: pid=2734920: Thu Jul 25 07:27:06 2024 00:21:35.483 read: IOPS=88, BW=88.3MiB/s (92.6MB/s)(902MiB/10216msec) 00:21:35.483 slat (usec): min=68, max=2208.8k, avg=11281.48, stdev=100830.02 00:21:35.483 clat (msec): min=37, max=4821, avg=1352.35, stdev=1313.29 00:21:35.483 lat (msec): min=504, max=4825, avg=1363.64, stdev=1315.40 00:21:35.483 clat percentiles (msec): 00:21:35.483 | 1.00th=[ 510], 5.00th=[ 584], 10.00th=[ 617], 20.00th=[ 642], 00:21:35.483 | 30.00th=[ 709], 40.00th=[ 735], 50.00th=[ 793], 60.00th=[ 1003], 00:21:35.483 | 70.00th=[ 1045], 80.00th=[ 1133], 90.00th=[ 4463], 95.00th=[ 4597], 00:21:35.483 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:21:35.483 | 99.99th=[ 4799] 00:21:35.483 bw ( KiB/s): min= 2048, max=217088, per=3.52%, avg=144104.73, stdev=64583.63, samples=11 00:21:35.483 iops : 
min= 2, max= 212, avg=140.73, stdev=63.07, samples=11 00:21:35.483 lat (msec) : 50=0.11%, 750=42.46%, 1000=17.74%, 2000=25.50%, >=2000=14.19% 00:21:35.483 cpu : usr=0.00%, sys=1.55%, ctx=1715, majf=0, minf=32769 00:21:35.483 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:21:35.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.483 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.483 issued rwts: total=902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.483 job0: (groupid=0, jobs=1): err= 0: pid=2734921: Thu Jul 25 07:27:06 2024 00:21:35.483 read: IOPS=6, BW=7146KiB/s (7318kB/s)(72.0MiB/10317msec) 00:21:35.483 slat (usec): min=852, max=4286.3k, avg=142742.06, stdev=620384.83 00:21:35.483 clat (msec): min=38, max=10314, avg=9380.54, stdev=1753.80 00:21:35.483 lat (msec): min=2135, max=10315, avg=9523.29, stdev=1355.81 00:21:35.483 clat percentiles (msec): 00:21:35.483 | 1.00th=[ 39], 5.00th=[ 6409], 10.00th=[ 8423], 20.00th=[ 8490], 00:21:35.483 | 30.00th=[ 8557], 40.00th=[10268], 50.00th=[10268], 60.00th=[10268], 00:21:35.483 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:21:35.483 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:35.483 | 99.99th=[10268] 00:21:35.483 lat (msec) : 50=1.39%, >=2000=98.61% 00:21:35.483 cpu : usr=0.00%, sys=0.65%, ctx=131, majf=0, minf=18433 00:21:35.484 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:21:35.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.484 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:35.484 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.484 job0: (groupid=0, jobs=1): err= 0: pid=2734922: Thu Jul 25 07:27:06 2024 00:21:35.484 read: IOPS=20, BW=20.3MiB/s (21.2MB/s)(207MiB/10221msec) 00:21:35.484 slat (usec): min=63, max=2162.3k, avg=49162.67, stdev=281731.24 00:21:35.484 clat (msec): min=43, max=10144, avg=6009.12, stdev=3635.21 00:21:35.484 lat (msec): min=790, max=10172, avg=6058.29, stdev=3617.54 00:21:35.484 clat percentiles (msec): 00:21:35.484 | 1.00th=[ 793], 5.00th=[ 844], 10.00th=[ 852], 20.00th=[ 860], 00:21:35.484 | 30.00th=[ 3473], 40.00th=[ 5470], 50.00th=[ 6409], 60.00th=[ 9060], 00:21:35.484 | 70.00th=[ 9329], 80.00th=[ 9463], 90.00th=[ 9463], 95.00th=[ 9597], 00:21:35.484 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:21:35.484 | 99.99th=[10134] 00:21:35.484 bw ( KiB/s): min= 4096, max=65667, per=0.56%, avg=23131.86, stdev=23171.71, samples=7 00:21:35.484 iops : min= 4, max= 64, avg=22.57, stdev=22.59, samples=7 00:21:35.484 lat (msec) : 50=0.48%, 1000=24.15%, 2000=0.97%, >=2000=74.40% 00:21:35.484 cpu : usr=0.00%, sys=0.80%, ctx=244, majf=0, minf=32769 00:21:35.484 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.9%, 16=7.7%, 32=15.5%, >=64=69.6% 00:21:35.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.484 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:21:35.484 issued rwts: total=207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.484 job0: (groupid=0, jobs=1): err= 0: pid=2734923: Thu Jul 25 07:27:06 2024 00:21:35.484 read: IOPS=57, BW=57.9MiB/s 
(60.7MB/s)(597MiB/10305msec) 00:21:35.484 slat (usec): min=75, max=2195.2k, avg=17180.31, stdev=128639.70 00:21:35.484 clat (msec): min=44, max=4905, avg=2109.14, stdev=1393.72 00:21:35.484 lat (msec): min=604, max=4909, avg=2126.32, stdev=1391.94 00:21:35.484 clat percentiles (msec): 00:21:35.484 | 1.00th=[ 609], 5.00th=[ 642], 10.00th=[ 709], 20.00th=[ 827], 00:21:35.484 | 30.00th=[ 1062], 40.00th=[ 1485], 50.00th=[ 1770], 60.00th=[ 2123], 00:21:35.484 | 70.00th=[ 2232], 80.00th=[ 4329], 90.00th=[ 4597], 95.00th=[ 4732], 00:21:35.484 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4933], 99.95th=[ 4933], 00:21:35.484 | 99.99th=[ 4933] 00:21:35.484 bw ( KiB/s): min= 6144, max=221184, per=2.34%, avg=96051.20, stdev=67968.72, samples=10 00:21:35.484 iops : min= 6, max= 216, avg=93.80, stdev=66.38, samples=10 00:21:35.484 lat (msec) : 50=0.17%, 750=12.90%, 1000=13.90%, 2000=29.98%, >=2000=43.05% 00:21:35.484 cpu : usr=0.02%, sys=1.67%, ctx=1273, majf=0, minf=32769 00:21:35.484 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:21:35.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.484 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.484 issued rwts: total=597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.484 job0: (groupid=0, jobs=1): err= 0: pid=2734924: Thu Jul 25 07:27:06 2024 00:21:35.484 read: IOPS=42, BW=42.6MiB/s (44.7MB/s)(435MiB/10213msec) 00:21:35.484 slat (usec): min=594, max=2170.8k, avg=23323.76, stdev=174508.91 00:21:35.484 clat (msec): min=64, max=7394, avg=2737.97, stdev=2678.76 00:21:35.484 lat (msec): min=738, max=7399, avg=2761.29, stdev=2681.08 00:21:35.484 clat percentiles (msec): 00:21:35.484 | 1.00th=[ 735], 5.00th=[ 751], 10.00th=[ 776], 20.00th=[ 835], 00:21:35.484 | 30.00th=[ 919], 40.00th=[ 936], 50.00th=[ 1150], 60.00th=[ 1368], 00:21:35.484 | 70.00th=[ 2140], 80.00th=[ 6745], 90.00th=[ 7080], 95.00th=[ 7282], 00:21:35.484 | 99.00th=[ 7349], 99.50th=[ 7416], 99.90th=[ 7416], 99.95th=[ 7416], 00:21:35.484 | 99.99th=[ 7416] 00:21:35.484 bw ( KiB/s): min= 4096, max=176128, per=2.19%, avg=89819.43, stdev=69081.97, samples=7 00:21:35.484 iops : min= 4, max= 172, avg=87.71, stdev=67.46, samples=7 00:21:35.484 lat (msec) : 100=0.23%, 750=4.37%, 1000=40.69%, 2000=24.37%, >=2000=30.34% 00:21:35.484 cpu : usr=0.02%, sys=1.23%, ctx=999, majf=0, minf=32769 00:21:35.484 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5% 00:21:35.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.484 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.484 issued rwts: total=435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.484 job0: (groupid=0, jobs=1): err= 0: pid=2734925: Thu Jul 25 07:27:06 2024 00:21:35.484 read: IOPS=11, BW=11.8MiB/s (12.4MB/s)(120MiB/10168msec) 00:21:35.484 slat (usec): min=577, max=3542.1k, avg=84373.81, stdev=419766.90 00:21:35.484 clat (msec): min=42, max=10108, avg=8181.37, stdev=1969.38 00:21:35.484 lat (msec): min=2108, max=10167, avg=8265.75, stdev=1829.67 00:21:35.484 clat percentiles (msec): 00:21:35.484 | 1.00th=[ 2106], 5.00th=[ 5671], 10.00th=[ 5805], 20.00th=[ 5940], 00:21:35.484 | 30.00th=[ 6208], 40.00th=[ 8792], 50.00th=[ 9060], 60.00th=[ 9329], 00:21:35.484 | 70.00th=[ 9597], 80.00th=[ 9731], 90.00th=[10000], 95.00th=[10000], 00:21:35.484 
| 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:21:35.484 | 99.99th=[10134] 00:21:35.484 lat (msec) : 50=0.83%, >=2000=99.17% 00:21:35.484 cpu : usr=0.01%, sys=0.81%, ctx=472, majf=0, minf=30721 00:21:35.484 IO depths : 1=0.8%, 2=1.7%, 4=3.3%, 8=6.7%, 16=13.3%, 32=26.7%, >=64=47.5% 00:21:35.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.484 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:35.484 issued rwts: total=120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.484 job0: (groupid=0, jobs=1): err= 0: pid=2734926: Thu Jul 25 07:27:06 2024 00:21:35.484 read: IOPS=2, BW=2515KiB/s (2575kB/s)(25.0MiB/10179msec) 00:21:35.484 slat (msec): min=2, max=2133, avg=405.36, stdev=806.62 00:21:35.484 clat (msec): min=44, max=10079, avg=4969.12, stdev=3009.61 00:21:35.484 lat (msec): min=2094, max=10178, avg=5374.48, stdev=3001.17 00:21:35.484 clat percentiles (msec): 00:21:35.484 | 1.00th=[ 45], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 2123], 00:21:35.484 | 30.00th=[ 2140], 40.00th=[ 4245], 50.00th=[ 4279], 60.00th=[ 4279], 00:21:35.484 | 70.00th=[ 6409], 80.00th=[ 8557], 90.00th=[10000], 95.00th=[10134], 00:21:35.484 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:21:35.484 | 99.99th=[10134] 00:21:35.484 lat (msec) : 50=4.00%, >=2000=96.00% 00:21:35.484 cpu : usr=0.00%, sys=0.18%, ctx=64, majf=0, minf=6401 00:21:35.484 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:21:35.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.484 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:21:35.484 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.484 job0: (groupid=0, jobs=1): err= 0: pid=2734927: Thu Jul 25 07:27:06 2024 00:21:35.484 read: IOPS=117, BW=118MiB/s (124MB/s)(1189MiB/10092msec) 00:21:35.484 slat (usec): min=44, max=101337, avg=8424.91, stdev=15286.11 00:21:35.484 clat (msec): min=66, max=2178, avg=959.76, stdev=525.44 00:21:35.484 lat (msec): min=102, max=2182, avg=968.18, stdev=529.54 00:21:35.484 clat percentiles (msec): 00:21:35.484 | 1.00th=[ 126], 5.00th=[ 409], 10.00th=[ 443], 20.00th=[ 472], 00:21:35.484 | 30.00th=[ 600], 40.00th=[ 709], 50.00th=[ 785], 60.00th=[ 877], 00:21:35.484 | 70.00th=[ 1036], 80.00th=[ 1636], 90.00th=[ 1838], 95.00th=[ 1871], 00:21:35.484 | 99.00th=[ 2089], 99.50th=[ 2123], 99.90th=[ 2165], 99.95th=[ 2165], 00:21:35.484 | 99.99th=[ 2165] 00:21:35.484 bw ( KiB/s): min=57344, max=299008, per=3.31%, avg=135688.06, stdev=75712.29, samples=16 00:21:35.484 iops : min= 56, max= 292, avg=132.50, stdev=73.94, samples=16 00:21:35.484 lat (msec) : 100=0.08%, 250=2.44%, 500=19.60%, 750=20.27%, 1000=25.90% 00:21:35.484 lat (msec) : 2000=29.69%, >=2000=2.02% 00:21:35.484 cpu : usr=0.04%, sys=1.89%, ctx=2114, majf=0, minf=32769 00:21:35.484 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:21:35.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.484 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.484 issued rwts: total=1189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.484 job0: (groupid=0, jobs=1): err= 0: pid=2734928: Thu Jul 25 07:27:06 2024 
00:21:35.484 read: IOPS=92, BW=92.2MiB/s (96.7MB/s)(943MiB/10229msec) 00:21:35.484 slat (usec): min=419, max=1572.5k, avg=10795.18, stdev=53319.27 00:21:35.484 clat (msec): min=43, max=2684, avg=1206.89, stdev=549.38 00:21:35.484 lat (msec): min=581, max=2690, avg=1217.69, stdev=549.50 00:21:35.484 clat percentiles (msec): 00:21:35.484 | 1.00th=[ 584], 5.00th=[ 600], 10.00th=[ 625], 20.00th=[ 760], 00:21:35.484 | 30.00th=[ 953], 40.00th=[ 1028], 50.00th=[ 1083], 60.00th=[ 1099], 00:21:35.484 | 70.00th=[ 1200], 80.00th=[ 1318], 90.00th=[ 2265], 95.00th=[ 2467], 00:21:35.484 | 99.00th=[ 2635], 99.50th=[ 2668], 99.90th=[ 2702], 99.95th=[ 2702], 00:21:35.484 | 99.99th=[ 2702] 00:21:35.484 bw ( KiB/s): min=16384, max=221184, per=2.91%, avg=119222.86, stdev=55920.75, samples=14 00:21:35.484 iops : min= 16, max= 216, avg=116.43, stdev=54.61, samples=14 00:21:35.484 lat (msec) : 50=0.11%, 750=18.66%, 1000=14.95%, 2000=52.28%, >=2000=14.00% 00:21:35.484 cpu : usr=0.09%, sys=1.63%, ctx=1894, majf=0, minf=32769 00:21:35.484 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.3% 00:21:35.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.484 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.484 issued rwts: total=943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.484 job0: (groupid=0, jobs=1): err= 0: pid=2734929: Thu Jul 25 07:27:06 2024 00:21:35.484 read: IOPS=45, BW=45.6MiB/s (47.8MB/s)(470MiB/10314msec) 00:21:35.484 slat (usec): min=44, max=2120.4k, avg=21846.63, stdev=158687.60 00:21:35.484 clat (msec): min=43, max=10128, avg=2554.70, stdev=2380.78 00:21:35.484 lat (msec): min=586, max=10142, avg=2576.54, stdev=2381.82 00:21:35.485 clat percentiles (msec): 00:21:35.485 | 1.00th=[ 584], 5.00th=[ 592], 10.00th=[ 600], 20.00th=[ 625], 00:21:35.485 | 30.00th=[ 667], 40.00th=[ 1003], 50.00th=[ 1485], 60.00th=[ 1921], 00:21:35.485 | 70.00th=[ 2333], 80.00th=[ 6208], 90.00th=[ 6477], 95.00th=[ 6611], 00:21:35.485 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[10134], 99.95th=[10134], 00:21:35.485 | 99.99th=[10134] 00:21:35.485 bw ( KiB/s): min= 2048, max=217088, per=1.90%, avg=77824.00, stdev=85576.02, samples=9 00:21:35.485 iops : min= 2, max= 212, avg=76.00, stdev=83.57, samples=9 00:21:35.485 lat (msec) : 50=0.21%, 750=32.98%, 1000=6.60%, 2000=22.55%, >=2000=37.66% 00:21:35.485 cpu : usr=0.01%, sys=1.33%, ctx=765, majf=0, minf=32769 00:21:35.485 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.6% 00:21:35.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.485 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.485 issued rwts: total=470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.485 job0: (groupid=0, jobs=1): err= 0: pid=2734930: Thu Jul 25 07:27:06 2024 00:21:35.485 read: IOPS=8, BW=8580KiB/s (8786kB/s)(86.0MiB/10264msec) 00:21:35.485 slat (usec): min=931, max=2116.9k, avg=118896.93, stdev=439734.22 00:21:35.485 clat (msec): min=38, max=10261, avg=8816.95, stdev=1930.63 00:21:35.485 lat (msec): min=2124, max=10263, avg=8935.85, stdev=1682.57 00:21:35.485 clat percentiles (msec): 00:21:35.485 | 1.00th=[ 39], 5.00th=[ 4279], 10.00th=[ 6477], 20.00th=[ 8658], 00:21:35.485 | 30.00th=[ 8926], 40.00th=[ 9060], 50.00th=[ 9194], 60.00th=[ 9463], 00:21:35.485 | 70.00th=[ 9731], 80.00th=[10000], 
90.00th=[10268], 95.00th=[10268], 00:21:35.485 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:35.485 | 99.99th=[10268] 00:21:35.485 lat (msec) : 50=1.16%, >=2000=98.84% 00:21:35.485 cpu : usr=0.00%, sys=0.63%, ctx=244, majf=0, minf=22017 00:21:35.485 IO depths : 1=1.2%, 2=2.3%, 4=4.7%, 8=9.3%, 16=18.6%, 32=37.2%, >=64=26.7% 00:21:35.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.485 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:35.485 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.485 job1: (groupid=0, jobs=1): err= 0: pid=2734931: Thu Jul 25 07:27:06 2024 00:21:35.485 read: IOPS=45, BW=45.9MiB/s (48.1MB/s)(471MiB/10260msec) 00:21:35.485 slat (usec): min=35, max=2079.6k, avg=21681.40, stdev=135230.82 00:21:35.485 clat (msec): min=44, max=5328, avg=2625.93, stdev=1431.49 00:21:35.485 lat (msec): min=881, max=5350, avg=2647.61, stdev=1426.20 00:21:35.485 clat percentiles (msec): 00:21:35.485 | 1.00th=[ 885], 5.00th=[ 1167], 10.00th=[ 1318], 20.00th=[ 1485], 00:21:35.485 | 30.00th=[ 1770], 40.00th=[ 1938], 50.00th=[ 2072], 60.00th=[ 2165], 00:21:35.485 | 70.00th=[ 2333], 80.00th=[ 4530], 90.00th=[ 5201], 95.00th=[ 5201], 00:21:35.485 | 99.00th=[ 5336], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336], 00:21:35.485 | 99.99th=[ 5336] 00:21:35.485 bw ( KiB/s): min= 8192, max=149504, per=1.32%, avg=54029.38, stdev=39533.96, samples=13 00:21:35.485 iops : min= 8, max= 146, avg=52.69, stdev=38.63, samples=13 00:21:35.485 lat (msec) : 50=0.21%, 1000=2.97%, 2000=39.07%, >=2000=57.75% 00:21:35.485 cpu : usr=0.03%, sys=1.23%, ctx=1162, majf=0, minf=32769 00:21:35.485 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.6% 00:21:35.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.485 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.485 issued rwts: total=471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.485 job1: (groupid=0, jobs=1): err= 0: pid=2734932: Thu Jul 25 07:27:06 2024 00:21:35.485 read: IOPS=219, BW=219MiB/s (230MB/s)(2209MiB/10066msec) 00:21:35.485 slat (usec): min=45, max=69471, avg=4521.27, stdev=7880.06 00:21:35.485 clat (msec): min=64, max=919, avg=557.94, stdev=157.96 00:21:35.485 lat (msec): min=66, max=925, avg=562.46, stdev=158.99 00:21:35.485 clat percentiles (msec): 00:21:35.485 | 1.00th=[ 153], 5.00th=[ 288], 10.00th=[ 355], 20.00th=[ 409], 00:21:35.485 | 30.00th=[ 481], 40.00th=[ 514], 50.00th=[ 592], 60.00th=[ 617], 00:21:35.485 | 70.00th=[ 659], 80.00th=[ 709], 90.00th=[ 751], 95.00th=[ 793], 00:21:35.485 | 99.00th=[ 860], 99.50th=[ 869], 99.90th=[ 911], 99.95th=[ 919], 00:21:35.485 | 99.99th=[ 919] 00:21:35.485 bw ( KiB/s): min=67584, max=405504, per=5.47%, avg=224280.89, stdev=73589.92, samples=19 00:21:35.485 iops : min= 66, max= 396, avg=219.00, stdev=71.87, samples=19 00:21:35.485 lat (msec) : 100=0.45%, 250=1.40%, 500=36.85%, 750=51.29%, 1000=10.00% 00:21:35.485 cpu : usr=0.17%, sys=2.69%, ctx=2006, majf=0, minf=32769 00:21:35.485 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.1% 00:21:35.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.485 issued rwts: total=2209,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:35.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.485 job1: (groupid=0, jobs=1): err= 0: pid=2734933: Thu Jul 25 07:27:06 2024 00:21:35.485 read: IOPS=49, BW=49.6MiB/s (52.1MB/s)(510MiB/10272msec) 00:21:35.485 slat (usec): min=46, max=1962.2k, avg=20077.14, stdev=113213.80 00:21:35.485 clat (msec): min=28, max=5623, avg=2300.04, stdev=1410.10 00:21:35.485 lat (msec): min=538, max=5631, avg=2320.12, stdev=1410.52 00:21:35.485 clat percentiles (msec): 00:21:35.485 | 1.00th=[ 542], 5.00th=[ 567], 10.00th=[ 625], 20.00th=[ 1036], 00:21:35.485 | 30.00th=[ 1385], 40.00th=[ 1854], 50.00th=[ 2106], 60.00th=[ 2198], 00:21:35.485 | 70.00th=[ 2333], 80.00th=[ 3876], 90.00th=[ 4597], 95.00th=[ 5134], 00:21:35.485 | 99.00th=[ 5537], 99.50th=[ 5604], 99.90th=[ 5604], 99.95th=[ 5604], 00:21:35.485 | 99.99th=[ 5604] 00:21:35.485 bw ( KiB/s): min= 6144, max=229376, per=1.74%, avg=71121.45, stdev=65183.63, samples=11 00:21:35.485 iops : min= 6, max= 224, avg=69.45, stdev=63.66, samples=11 00:21:35.485 lat (msec) : 50=0.20%, 750=13.33%, 1000=6.08%, 2000=23.33%, >=2000=57.06% 00:21:35.485 cpu : usr=0.00%, sys=1.19%, ctx=1336, majf=0, minf=32769 00:21:35.485 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:21:35.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.485 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.485 issued rwts: total=510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.485 job1: (groupid=0, jobs=1): err= 0: pid=2734934: Thu Jul 25 07:27:06 2024 00:21:35.485 read: IOPS=68, BW=68.7MiB/s (72.1MB/s)(703MiB/10226msec) 00:21:35.485 slat (usec): min=35, max=1966.5k, avg=14507.72, stdev=104664.23 00:21:35.485 clat (msec): min=23, max=5321, avg=1708.49, stdev=1471.88 00:21:35.485 lat (msec): min=409, max=5324, avg=1723.00, stdev=1474.23 00:21:35.485 clat percentiles (msec): 00:21:35.485 | 1.00th=[ 422], 5.00th=[ 489], 10.00th=[ 498], 20.00th=[ 550], 00:21:35.485 | 30.00th=[ 785], 40.00th=[ 961], 50.00th=[ 1116], 60.00th=[ 1351], 00:21:35.485 | 70.00th=[ 1703], 80.00th=[ 2056], 90.00th=[ 4597], 95.00th=[ 4933], 00:21:35.485 | 99.00th=[ 5269], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336], 00:21:35.485 | 99.99th=[ 5336] 00:21:35.485 bw ( KiB/s): min= 6144, max=270336, per=2.61%, avg=107054.55, stdev=98048.05, samples=11 00:21:35.485 iops : min= 6, max= 264, avg=104.55, stdev=95.75, samples=11 00:21:35.485 lat (msec) : 50=0.14%, 500=13.66%, 750=15.79%, 1000=14.65%, 2000=34.71% 00:21:35.485 lat (msec) : >=2000=21.05% 00:21:35.485 cpu : usr=0.02%, sys=1.30%, ctx=1421, majf=0, minf=32769 00:21:35.485 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:21:35.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.485 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.485 issued rwts: total=703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.485 job1: (groupid=0, jobs=1): err= 0: pid=2734935: Thu Jul 25 07:27:06 2024 00:21:35.485 read: IOPS=2, BW=2997KiB/s (3069kB/s)(30.0MiB/10249msec) 00:21:35.485 slat (usec): min=1487, max=2090.5k, avg=339676.21, stdev=748388.15 00:21:35.485 clat (msec): min=57, max=10236, avg=5776.24, stdev=3001.26 00:21:35.485 lat (msec): min=2123, max=10248, avg=6115.92, stdev=2906.92 00:21:35.485 clat 
percentiles (msec): 00:21:35.485 | 1.00th=[ 58], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 2165], 00:21:35.485 | 30.00th=[ 4245], 40.00th=[ 4245], 50.00th=[ 6409], 60.00th=[ 6477], 00:21:35.485 | 70.00th=[ 6477], 80.00th=[ 8658], 90.00th=[10268], 95.00th=[10268], 00:21:35.485 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:35.485 | 99.99th=[10268] 00:21:35.485 lat (msec) : 100=3.33%, >=2000=96.67% 00:21:35.485 cpu : usr=0.00%, sys=0.21%, ctx=71, majf=0, minf=7681 00:21:35.485 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:21:35.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.485 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:21:35.485 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.485 job1: (groupid=0, jobs=1): err= 0: pid=2734936: Thu Jul 25 07:27:06 2024 00:21:35.485 read: IOPS=93, BW=93.1MiB/s (97.7MB/s)(957MiB/10276msec) 00:21:35.485 slat (usec): min=43, max=2145.9k, avg=10697.30, stdev=93809.20 00:21:35.485 clat (msec): min=31, max=4805, avg=1261.35, stdev=1252.89 00:21:35.485 lat (msec): min=633, max=4815, avg=1272.05, stdev=1255.94 00:21:35.485 clat percentiles (msec): 00:21:35.485 | 1.00th=[ 634], 5.00th=[ 634], 10.00th=[ 642], 20.00th=[ 642], 00:21:35.485 | 30.00th=[ 642], 40.00th=[ 651], 50.00th=[ 667], 60.00th=[ 693], 00:21:35.485 | 70.00th=[ 726], 80.00th=[ 1703], 90.00th=[ 4329], 95.00th=[ 4530], 00:21:35.485 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:21:35.485 | 99.99th=[ 4799] 00:21:35.485 bw ( KiB/s): min=38912, max=204800, per=4.14%, avg=169779.20, stdev=60856.58, samples=10 00:21:35.485 iops : min= 38, max= 200, avg=165.80, stdev=59.43, samples=10 00:21:35.486 lat (msec) : 50=0.10%, 750=72.83%, 1000=2.19%, 2000=9.82%, >=2000=15.05% 00:21:35.486 cpu : usr=0.09%, sys=2.08%, ctx=996, majf=0, minf=32769 00:21:35.486 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4% 00:21:35.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.486 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.486 issued rwts: total=957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.486 job1: (groupid=0, jobs=1): err= 0: pid=2734937: Thu Jul 25 07:27:06 2024 00:21:35.486 read: IOPS=28, BW=28.1MiB/s (29.4MB/s)(288MiB/10263msec) 00:21:35.486 slat (usec): min=537, max=2095.1k, avg=35409.70, stdev=230017.93 00:21:35.486 clat (msec): min=62, max=10167, avg=3667.53, stdev=2935.08 00:21:35.486 lat (msec): min=869, max=10181, avg=3702.94, stdev=2943.24 00:21:35.486 clat percentiles (msec): 00:21:35.486 | 1.00th=[ 860], 5.00th=[ 869], 10.00th=[ 885], 20.00th=[ 902], 00:21:35.486 | 30.00th=[ 919], 40.00th=[ 961], 50.00th=[ 2140], 60.00th=[ 6342], 00:21:35.486 | 70.00th=[ 6745], 80.00th=[ 7013], 90.00th=[ 7215], 95.00th=[ 7349], 00:21:35.486 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:21:35.486 | 99.99th=[10134] 00:21:35.486 bw ( KiB/s): min= 4096, max=118784, per=1.33%, avg=54613.33, stdev=54948.52, samples=6 00:21:35.486 iops : min= 4, max= 116, avg=53.33, stdev=53.66, samples=6 00:21:35.486 lat (msec) : 100=0.35%, 1000=44.79%, 2000=3.12%, >=2000=51.74% 00:21:35.486 cpu : usr=0.02%, sys=1.01%, ctx=734, majf=0, minf=32769 00:21:35.486 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 
32=11.1%, >=64=78.1% 00:21:35.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.486 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:21:35.486 issued rwts: total=288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.486 job1: (groupid=0, jobs=1): err= 0: pid=2734938: Thu Jul 25 07:27:06 2024 00:21:35.486 read: IOPS=19, BW=19.7MiB/s (20.7MB/s)(202MiB/10247msec) 00:21:35.486 slat (usec): min=724, max=2095.1k, avg=50492.94, stdev=247514.49 00:21:35.486 clat (msec): min=45, max=8351, avg=5447.84, stdev=2325.60 00:21:35.486 lat (msec): min=1987, max=8359, avg=5498.33, stdev=2291.58 00:21:35.486 clat percentiles (msec): 00:21:35.486 | 1.00th=[ 1989], 5.00th=[ 2089], 10.00th=[ 2232], 20.00th=[ 2534], 00:21:35.486 | 30.00th=[ 3004], 40.00th=[ 6342], 50.00th=[ 6678], 60.00th=[ 6812], 00:21:35.486 | 70.00th=[ 7080], 80.00th=[ 7550], 90.00th=[ 8020], 95.00th=[ 8154], 00:21:35.486 | 99.00th=[ 8288], 99.50th=[ 8356], 99.90th=[ 8356], 99.95th=[ 8356], 00:21:35.486 | 99.99th=[ 8356] 00:21:35.486 bw ( KiB/s): min= 4096, max=77824, per=0.92%, avg=37888.00, stdev=37297.60, samples=4 00:21:35.486 iops : min= 4, max= 76, avg=37.00, stdev=36.42, samples=4 00:21:35.486 lat (msec) : 50=0.50%, 2000=1.98%, >=2000=97.52% 00:21:35.486 cpu : usr=0.01%, sys=0.98%, ctx=714, majf=0, minf=32769 00:21:35.486 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=7.9%, 32=15.8%, >=64=68.8% 00:21:35.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.486 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:21:35.486 issued rwts: total=202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.486 job1: (groupid=0, jobs=1): err= 0: pid=2734939: Thu Jul 25 07:27:06 2024 00:21:35.486 read: IOPS=93, BW=93.2MiB/s (97.7MB/s)(957MiB/10271msec) 00:21:35.486 slat (usec): min=45, max=1971.2k, avg=10703.53, stdev=90259.82 00:21:35.486 clat (msec): min=23, max=4793, avg=1281.12, stdev=1341.28 00:21:35.486 lat (msec): min=241, max=4794, avg=1291.82, stdev=1343.69 00:21:35.486 clat percentiles (msec): 00:21:35.486 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 257], 20.00th=[ 309], 00:21:35.486 | 30.00th=[ 558], 40.00th=[ 667], 50.00th=[ 827], 60.00th=[ 894], 00:21:35.486 | 70.00th=[ 1250], 80.00th=[ 1620], 90.00th=[ 4530], 95.00th=[ 4732], 00:21:35.486 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:21:35.486 | 99.99th=[ 4799] 00:21:35.486 bw ( KiB/s): min=16384, max=411648, per=3.45%, avg=141456.42, stdev=136942.71, samples=12 00:21:35.486 iops : min= 16, max= 402, avg=138.08, stdev=133.73, samples=12 00:21:35.486 lat (msec) : 50=0.10%, 250=1.67%, 500=25.29%, 750=19.02%, 1000=18.60% 00:21:35.486 lat (msec) : 2000=21.32%, >=2000=14.00% 00:21:35.486 cpu : usr=0.01%, sys=1.56%, ctx=1591, majf=0, minf=32769 00:21:35.486 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4% 00:21:35.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.486 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.486 issued rwts: total=957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.486 job1: (groupid=0, jobs=1): err= 0: pid=2734940: Thu Jul 25 07:27:06 2024 00:21:35.486 read: IOPS=70, BW=70.0MiB/s (73.4MB/s)(717MiB/10240msec) 00:21:35.486 slat (usec): 
min=52, max=2094.3k, avg=14184.39, stdev=132871.46 00:21:35.486 clat (msec): min=64, max=7215, avg=1754.02, stdev=2308.68 00:21:35.486 lat (msec): min=608, max=7218, avg=1768.20, stdev=2314.98 00:21:35.486 clat percentiles (msec): 00:21:35.486 | 1.00th=[ 609], 5.00th=[ 617], 10.00th=[ 625], 20.00th=[ 625], 00:21:35.486 | 30.00th=[ 634], 40.00th=[ 642], 50.00th=[ 667], 60.00th=[ 676], 00:21:35.486 | 70.00th=[ 693], 80.00th=[ 919], 90.00th=[ 6812], 95.00th=[ 7013], 00:21:35.486 | 99.00th=[ 7215], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215], 00:21:35.486 | 99.99th=[ 7215] 00:21:35.486 bw ( KiB/s): min=10240, max=208896, per=3.27%, avg=134030.22, stdev=80467.84, samples=9 00:21:35.486 iops : min= 10, max= 204, avg=130.89, stdev=78.58, samples=9 00:21:35.486 lat (msec) : 100=0.14%, 750=75.87%, 1000=4.46%, >=2000=19.53% 00:21:35.486 cpu : usr=0.06%, sys=1.66%, ctx=980, majf=0, minf=32769 00:21:35.486 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:21:35.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.486 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.486 issued rwts: total=717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.486 job1: (groupid=0, jobs=1): err= 0: pid=2734941: Thu Jul 25 07:27:06 2024 00:21:35.486 read: IOPS=78, BW=78.9MiB/s (82.8MB/s)(815MiB/10325msec) 00:21:35.486 slat (usec): min=56, max=1962.8k, avg=12572.72, stdev=82829.73 00:21:35.486 clat (msec): min=74, max=3724, avg=1523.63, stdev=987.81 00:21:35.486 lat (msec): min=386, max=3728, avg=1536.20, stdev=986.98 00:21:35.486 clat percentiles (msec): 00:21:35.486 | 1.00th=[ 393], 5.00th=[ 447], 10.00th=[ 502], 20.00th=[ 659], 00:21:35.486 | 30.00th=[ 944], 40.00th=[ 1116], 50.00th=[ 1301], 60.00th=[ 1418], 00:21:35.486 | 70.00th=[ 1603], 80.00th=[ 2123], 90.00th=[ 3473], 95.00th=[ 3608], 00:21:35.486 | 99.00th=[ 3708], 99.50th=[ 3708], 99.90th=[ 3742], 99.95th=[ 3742], 00:21:35.486 | 99.99th=[ 3742] 00:21:35.486 bw ( KiB/s): min= 2048, max=307200, per=2.64%, avg=108228.92, stdev=90618.10, samples=13 00:21:35.486 iops : min= 2, max= 300, avg=105.69, stdev=88.49, samples=13 00:21:35.486 lat (msec) : 100=0.12%, 500=8.71%, 750=15.95%, 1000=7.98%, 2000=43.68% 00:21:35.486 lat (msec) : >=2000=23.56% 00:21:35.486 cpu : usr=0.02%, sys=1.52%, ctx=1584, majf=0, minf=32769 00:21:35.486 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:21:35.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.486 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.486 issued rwts: total=815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.486 job1: (groupid=0, jobs=1): err= 0: pid=2734942: Thu Jul 25 07:27:06 2024 00:21:35.486 read: IOPS=6, BW=6319KiB/s (6471kB/s)(63.0MiB/10209msec) 00:21:35.486 slat (usec): min=409, max=2092.5k, avg=161085.79, stdev=536667.42 00:21:35.486 clat (msec): min=60, max=10206, avg=6223.07, stdev=3208.86 00:21:35.486 lat (msec): min=2093, max=10208, avg=6384.15, stdev=3148.67 00:21:35.486 clat percentiles (msec): 00:21:35.486 | 1.00th=[ 61], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2140], 00:21:35.486 | 30.00th=[ 4245], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:21:35.486 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10268], 00:21:35.486 | 99.00th=[10268], 99.50th=[10268], 
99.90th=[10268], 99.95th=[10268], 00:21:35.486 | 99.99th=[10268] 00:21:35.486 lat (msec) : 100=1.59%, >=2000=98.41% 00:21:35.486 cpu : usr=0.00%, sys=0.42%, ctx=72, majf=0, minf=16129 00:21:35.486 IO depths : 1=1.6%, 2=3.2%, 4=6.3%, 8=12.7%, 16=25.4%, 32=50.8%, >=64=0.0% 00:21:35.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.486 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:35.486 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.486 job1: (groupid=0, jobs=1): err= 0: pid=2734943: Thu Jul 25 07:27:06 2024 00:21:35.486 read: IOPS=3, BW=3979KiB/s (4075kB/s)(40.0MiB/10294msec) 00:21:35.486 slat (usec): min=892, max=2097.8k, avg=255741.25, stdev=665153.38 00:21:35.486 clat (msec): min=63, max=10291, avg=6053.45, stdev=3263.24 00:21:35.486 lat (msec): min=2138, max=10293, avg=6309.19, stdev=3181.60 00:21:35.486 clat percentiles (msec): 00:21:35.486 | 1.00th=[ 64], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2198], 00:21:35.486 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 4329], 60.00th=[ 6477], 00:21:35.486 | 70.00th=[ 8658], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:21:35.486 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:35.486 | 99.99th=[10268] 00:21:35.486 lat (msec) : 100=2.50%, >=2000=97.50% 00:21:35.486 cpu : usr=0.00%, sys=0.42%, ctx=75, majf=0, minf=10241 00:21:35.486 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:21:35.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.486 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:35.486 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.486 job2: (groupid=0, jobs=1): err= 0: pid=2734944: Thu Jul 25 07:27:06 2024 00:21:35.486 read: IOPS=125, BW=125MiB/s (131MB/s)(1263MiB/10097msec) 00:21:35.486 slat (usec): min=47, max=2116.7k, avg=7913.05, stdev=64573.30 00:21:35.486 clat (msec): min=95, max=2941, avg=983.27, stdev=696.71 00:21:35.486 lat (msec): min=96, max=2941, avg=991.18, stdev=698.97 00:21:35.487 clat percentiles (msec): 00:21:35.487 | 1.00th=[ 186], 5.00th=[ 477], 10.00th=[ 634], 20.00th=[ 642], 00:21:35.487 | 30.00th=[ 651], 40.00th=[ 676], 50.00th=[ 693], 60.00th=[ 735], 00:21:35.487 | 70.00th=[ 776], 80.00th=[ 1485], 90.00th=[ 2769], 95.00th=[ 2869], 00:21:35.487 | 99.00th=[ 2903], 99.50th=[ 2937], 99.90th=[ 2937], 99.95th=[ 2937], 00:21:35.487 | 99.99th=[ 2937] 00:21:35.487 bw ( KiB/s): min= 8192, max=212992, per=3.79%, avg=155079.20, stdev=62882.38, samples=15 00:21:35.487 iops : min= 8, max= 208, avg=151.40, stdev=61.40, samples=15 00:21:35.487 lat (msec) : 100=0.24%, 250=1.35%, 500=3.64%, 750=57.32%, 1000=17.34% 00:21:35.487 lat (msec) : 2000=10.06%, >=2000=10.06% 00:21:35.487 cpu : usr=0.02%, sys=2.08%, ctx=1234, majf=0, minf=32769 00:21:35.487 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:21:35.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.487 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.487 issued rwts: total=1263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.487 job2: (groupid=0, jobs=1): err= 0: pid=2734945: Thu Jul 25 07:27:06 2024 00:21:35.487 read: IOPS=37, 
BW=38.0MiB/s (39.8MB/s)(387MiB/10189msec) 00:21:35.487 slat (usec): min=56, max=2139.6k, avg=26280.05, stdev=147408.62 00:21:35.487 clat (msec): min=16, max=8513, avg=3096.91, stdev=2276.32 00:21:35.487 lat (msec): min=597, max=8539, avg=3123.19, stdev=2281.02 00:21:35.487 clat percentiles (msec): 00:21:35.487 | 1.00th=[ 659], 5.00th=[ 1020], 10.00th=[ 1183], 20.00th=[ 1250], 00:21:35.487 | 30.00th=[ 1351], 40.00th=[ 1519], 50.00th=[ 1787], 60.00th=[ 2123], 00:21:35.487 | 70.00th=[ 6141], 80.00th=[ 6275], 90.00th=[ 6342], 95.00th=[ 6544], 00:21:35.487 | 99.00th=[ 6812], 99.50th=[ 6879], 99.90th=[ 8490], 99.95th=[ 8490], 00:21:35.487 | 99.99th=[ 8490] 00:21:35.487 bw ( KiB/s): min= 2048, max=90112, per=1.08%, avg=44197.50, stdev=24791.66, samples=12 00:21:35.487 iops : min= 2, max= 88, avg=43.08, stdev=24.33, samples=12 00:21:35.487 lat (msec) : 20=0.26%, 750=2.07%, 1000=2.58%, 2000=50.13%, >=2000=44.96% 00:21:35.487 cpu : usr=0.03%, sys=0.99%, ctx=925, majf=0, minf=32769 00:21:35.487 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.3%, >=64=83.7% 00:21:35.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.487 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:35.487 issued rwts: total=387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.487 job2: (groupid=0, jobs=1): err= 0: pid=2734946: Thu Jul 25 07:27:06 2024 00:21:35.487 read: IOPS=23, BW=23.8MiB/s (25.0MB/s)(239MiB/10044msec) 00:21:35.487 slat (usec): min=619, max=2147.2k, avg=41842.16, stdev=211025.07 00:21:35.487 clat (msec): min=42, max=9959, avg=3997.68, stdev=2896.43 00:21:35.487 lat (msec): min=43, max=9976, avg=4039.52, stdev=2906.64 00:21:35.487 clat percentiles (msec): 00:21:35.487 | 1.00th=[ 46], 5.00th=[ 128], 10.00th=[ 275], 20.00th=[ 709], 00:21:35.487 | 30.00th=[ 1083], 40.00th=[ 1921], 50.00th=[ 6208], 60.00th=[ 6477], 00:21:35.487 | 70.00th=[ 6611], 80.00th=[ 6678], 90.00th=[ 6678], 95.00th=[ 6745], 00:21:35.487 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[10000], 99.95th=[10000], 00:21:35.487 | 99.99th=[10000] 00:21:35.487 bw ( KiB/s): min= 6144, max=59392, per=0.74%, avg=30310.40, stdev=19450.61, samples=5 00:21:35.487 iops : min= 6, max= 58, avg=29.60, stdev=18.99, samples=5 00:21:35.487 lat (msec) : 50=2.09%, 100=0.84%, 250=4.60%, 500=7.95%, 750=5.02% 00:21:35.487 lat (msec) : 1000=6.69%, 2000=17.57%, >=2000=55.23% 00:21:35.487 cpu : usr=0.00%, sys=0.80%, ctx=812, majf=0, minf=32769 00:21:35.487 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.7%, 32=13.4%, >=64=73.6% 00:21:35.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.487 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:21:35.487 issued rwts: total=239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.487 job2: (groupid=0, jobs=1): err= 0: pid=2734947: Thu Jul 25 07:27:06 2024 00:21:35.487 read: IOPS=41, BW=41.1MiB/s (43.1MB/s)(419MiB/10192msec) 00:21:35.487 slat (usec): min=730, max=2143.8k, avg=24147.43, stdev=175822.49 00:21:35.487 clat (msec): min=71, max=7192, avg=2766.19, stdev=2643.88 00:21:35.487 lat (msec): min=748, max=7196, avg=2790.34, stdev=2645.12 00:21:35.487 clat percentiles (msec): 00:21:35.487 | 1.00th=[ 751], 5.00th=[ 768], 10.00th=[ 793], 20.00th=[ 852], 00:21:35.487 | 30.00th=[ 894], 40.00th=[ 911], 50.00th=[ 1133], 60.00th=[ 1452], 00:21:35.487 | 70.00th=[ 4329], 
80.00th=[ 6678], 90.00th=[ 6946], 95.00th=[ 7080], 00:21:35.487 | 99.00th=[ 7148], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215], 00:21:35.487 | 99.99th=[ 7215] 00:21:35.487 bw ( KiB/s): min= 1620, max=172032, per=1.82%, avg=74698.50, stdev=70482.47, samples=8 00:21:35.487 iops : min= 1, max= 168, avg=72.88, stdev=68.92, samples=8 00:21:35.487 lat (msec) : 100=0.24%, 750=0.95%, 1000=43.44%, 2000=23.15%, >=2000=32.22% 00:21:35.487 cpu : usr=0.01%, sys=1.39%, ctx=1060, majf=0, minf=32769 00:21:35.487 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:21:35.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.487 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.487 issued rwts: total=419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.487 job2: (groupid=0, jobs=1): err= 0: pid=2734948: Thu Jul 25 07:27:06 2024 00:21:35.487 read: IOPS=12, BW=12.6MiB/s (13.2MB/s)(130MiB/10301msec) 00:21:35.487 slat (usec): min=451, max=2117.4k, avg=78680.77, stdev=335322.05 00:21:35.487 clat (msec): min=71, max=10296, avg=6897.93, stdev=3839.62 00:21:35.487 lat (msec): min=1247, max=10300, avg=6976.61, stdev=3803.28 00:21:35.487 clat percentiles (msec): 00:21:35.487 | 1.00th=[ 1250], 5.00th=[ 1452], 10.00th=[ 1569], 20.00th=[ 1838], 00:21:35.487 | 30.00th=[ 2106], 40.00th=[ 8557], 50.00th=[ 9597], 60.00th=[ 9866], 00:21:35.487 | 70.00th=[10000], 80.00th=[10000], 90.00th=[10268], 95.00th=[10268], 00:21:35.487 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:35.487 | 99.99th=[10268] 00:21:35.487 bw ( KiB/s): min= 4096, max= 4096, per=0.10%, avg=4096.00, stdev= 0.00, samples=1 00:21:35.487 iops : min= 4, max= 4, avg= 4.00, stdev= 0.00, samples=1 00:21:35.487 lat (msec) : 100=0.77%, 2000=24.62%, >=2000=74.62% 00:21:35.487 cpu : usr=0.00%, sys=0.78%, ctx=324, majf=0, minf=32769 00:21:35.487 IO depths : 1=0.8%, 2=1.5%, 4=3.1%, 8=6.2%, 16=12.3%, 32=24.6%, >=64=51.5% 00:21:35.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.487 complete : 0=0.0%, 4=75.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=25.0% 00:21:35.487 issued rwts: total=130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.487 job2: (groupid=0, jobs=1): err= 0: pid=2734949: Thu Jul 25 07:27:06 2024 00:21:35.487 read: IOPS=45, BW=45.9MiB/s (48.2MB/s)(462MiB/10056msec) 00:21:35.487 slat (usec): min=37, max=2075.8k, avg=21640.39, stdev=145640.15 00:21:35.487 clat (msec): min=54, max=6937, avg=2574.22, stdev=2056.47 00:21:35.487 lat (msec): min=55, max=8179, avg=2595.86, stdev=2068.51 00:21:35.487 clat percentiles (msec): 00:21:35.487 | 1.00th=[ 106], 5.00th=[ 292], 10.00th=[ 418], 20.00th=[ 651], 00:21:35.487 | 30.00th=[ 802], 40.00th=[ 978], 50.00th=[ 1905], 60.00th=[ 3138], 00:21:35.487 | 70.00th=[ 3339], 80.00th=[ 5000], 90.00th=[ 5134], 95.00th=[ 6544], 00:21:35.487 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6946], 99.95th=[ 6946], 00:21:35.487 | 99.99th=[ 6946] 00:21:35.487 bw ( KiB/s): min= 4096, max=155648, per=1.66%, avg=68166.70, stdev=45702.26, samples=10 00:21:35.487 iops : min= 4, max= 152, avg=66.50, stdev=44.54, samples=10 00:21:35.487 lat (msec) : 100=0.87%, 250=4.11%, 500=8.01%, 750=13.85%, 1000=14.50% 00:21:35.487 lat (msec) : 2000=10.39%, >=2000=48.27% 00:21:35.487 cpu : usr=0.03%, sys=1.37%, ctx=837, majf=0, minf=32769 00:21:35.487 IO depths : 
1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=6.9%, >=64=86.4% 00:21:35.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.487 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.487 issued rwts: total=462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.487 job2: (groupid=0, jobs=1): err= 0: pid=2734950: Thu Jul 25 07:27:06 2024 00:21:35.487 read: IOPS=58, BW=58.6MiB/s (61.4MB/s)(594MiB/10139msec) 00:21:35.487 slat (usec): min=110, max=2066.1k, avg=17062.45, stdev=101048.21 00:21:35.487 clat (usec): min=496, max=6473.1k, avg=1966965.53, stdev=1741896.76 00:21:35.487 lat (msec): min=716, max=6474, avg=1984.03, stdev=1746.50 00:21:35.487 clat percentiles (msec): 00:21:35.487 | 1.00th=[ 743], 5.00th=[ 768], 10.00th=[ 810], 20.00th=[ 827], 00:21:35.487 | 30.00th=[ 852], 40.00th=[ 1011], 50.00th=[ 1116], 60.00th=[ 1250], 00:21:35.487 | 70.00th=[ 1552], 80.00th=[ 4463], 90.00th=[ 5336], 95.00th=[ 5604], 00:21:35.487 | 99.00th=[ 6409], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:21:35.487 | 99.99th=[ 6477] 00:21:35.487 bw ( KiB/s): min= 8192, max=174080, per=2.12%, avg=86760.73, stdev=56898.09, samples=11 00:21:35.487 iops : min= 8, max= 170, avg=84.73, stdev=55.56, samples=11 00:21:35.487 lat (usec) : 500=0.17% 00:21:35.487 lat (msec) : 750=2.19%, 1000=36.03%, 2000=38.55%, >=2000=23.06% 00:21:35.487 cpu : usr=0.01%, sys=1.59%, ctx=1510, majf=0, minf=32769 00:21:35.487 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:21:35.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.487 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.487 issued rwts: total=594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.487 job2: (groupid=0, jobs=1): err= 0: pid=2734951: Thu Jul 25 07:27:06 2024 00:21:35.487 read: IOPS=203, BW=203MiB/s (213MB/s)(2098MiB/10318msec) 00:21:35.487 slat (usec): min=42, max=2066.2k, avg=4871.70, stdev=63402.09 00:21:35.487 clat (msec): min=85, max=4491, avg=587.57, stdev=957.60 00:21:35.487 lat (msec): min=124, max=4493, avg=592.44, stdev=960.83 00:21:35.487 clat percentiles (msec): 00:21:35.488 | 1.00th=[ 130], 5.00th=[ 209], 10.00th=[ 247], 20.00th=[ 251], 00:21:35.488 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 368], 60.00th=[ 376], 00:21:35.488 | 70.00th=[ 384], 80.00th=[ 456], 90.00th=[ 609], 95.00th=[ 4396], 00:21:35.488 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4463], 99.95th=[ 4463], 00:21:35.488 | 99.99th=[ 4463] 00:21:35.488 bw ( KiB/s): min= 2048, max=595968, per=8.21%, avg=336213.33, stdev=187447.86, samples=12 00:21:35.488 iops : min= 2, max= 582, avg=328.33, stdev=183.05, samples=12 00:21:35.488 lat (msec) : 100=0.05%, 250=19.78%, 500=64.20%, 750=7.48%, 1000=0.95% 00:21:35.488 lat (msec) : 2000=0.91%, >=2000=6.63% 00:21:35.488 cpu : usr=0.06%, sys=2.75%, ctx=2046, majf=0, minf=32769 00:21:35.488 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:21:35.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.488 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.488 issued rwts: total=2098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.488 job2: (groupid=0, jobs=1): err= 0: pid=2734952: Thu Jul 25 07:27:06 2024 
00:21:35.488 read: IOPS=135, BW=135MiB/s (142MB/s)(1359MiB/10045msec) 00:21:35.488 slat (usec): min=42, max=1599.3k, avg=7355.15, stdev=55404.90 00:21:35.488 clat (msec): min=42, max=3771, avg=821.76, stdev=866.54 00:21:35.488 lat (msec): min=45, max=3990, avg=829.11, stdev=873.16 00:21:35.488 clat percentiles (msec): 00:21:35.488 | 1.00th=[ 186], 5.00th=[ 368], 10.00th=[ 372], 20.00th=[ 372], 00:21:35.488 | 30.00th=[ 376], 40.00th=[ 384], 50.00th=[ 502], 60.00th=[ 542], 00:21:35.488 | 70.00th=[ 651], 80.00th=[ 810], 90.00th=[ 1871], 95.00th=[ 3339], 00:21:35.488 | 99.00th=[ 3742], 99.50th=[ 3775], 99.90th=[ 3775], 99.95th=[ 3775], 00:21:35.488 | 99.99th=[ 3775] 00:21:35.488 bw ( KiB/s): min= 6144, max=348160, per=4.27%, avg=175104.00, stdev=122724.33, samples=14 00:21:35.488 iops : min= 6, max= 340, avg=171.00, stdev=119.85, samples=14 00:21:35.488 lat (msec) : 50=0.37%, 100=0.15%, 250=1.18%, 500=48.05%, 750=27.45% 00:21:35.488 lat (msec) : 1000=7.14%, 2000=6.03%, >=2000=9.64% 00:21:35.488 cpu : usr=0.02%, sys=1.63%, ctx=1814, majf=0, minf=32769 00:21:35.488 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.4% 00:21:35.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.488 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.488 issued rwts: total=1359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.488 job2: (groupid=0, jobs=1): err= 0: pid=2734953: Thu Jul 25 07:27:06 2024 00:21:35.488 read: IOPS=57, BW=57.2MiB/s (60.0MB/s)(585MiB/10227msec) 00:21:35.488 slat (usec): min=44, max=1998.0k, avg=17357.98, stdev=115788.31 00:21:35.488 clat (msec): min=69, max=4860, avg=2050.88, stdev=1454.94 00:21:35.488 lat (msec): min=381, max=4861, avg=2068.24, stdev=1454.89 00:21:35.488 clat percentiles (msec): 00:21:35.488 | 1.00th=[ 384], 5.00th=[ 430], 10.00th=[ 472], 20.00th=[ 567], 00:21:35.488 | 30.00th=[ 885], 40.00th=[ 1401], 50.00th=[ 1854], 60.00th=[ 2198], 00:21:35.488 | 70.00th=[ 2400], 80.00th=[ 4212], 90.00th=[ 4597], 95.00th=[ 4732], 00:21:35.488 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4866], 99.95th=[ 4866], 00:21:35.488 | 99.99th=[ 4866] 00:21:35.488 bw ( KiB/s): min=12288, max=301056, per=2.08%, avg=85085.09, stdev=84656.32, samples=11 00:21:35.488 iops : min= 12, max= 294, avg=83.09, stdev=82.67, samples=11 00:21:35.488 lat (msec) : 100=0.17%, 500=13.16%, 750=13.16%, 1000=5.98%, 2000=22.05% 00:21:35.488 lat (msec) : >=2000=45.47% 00:21:35.488 cpu : usr=0.04%, sys=1.45%, ctx=1627, majf=0, minf=32769 00:21:35.488 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:21:35.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.488 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.488 issued rwts: total=585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.488 job2: (groupid=0, jobs=1): err= 0: pid=2734954: Thu Jul 25 07:27:06 2024 00:21:35.488 read: IOPS=29, BW=29.4MiB/s (30.9MB/s)(297MiB/10087msec) 00:21:35.488 slat (usec): min=122, max=2143.6k, avg=33778.23, stdev=211259.95 00:21:35.488 clat (msec): min=52, max=8704, avg=4070.73, stdev=3518.04 00:21:35.488 lat (msec): min=100, max=8705, avg=4104.51, stdev=3520.92 00:21:35.488 clat percentiles (msec): 00:21:35.488 | 1.00th=[ 103], 5.00th=[ 326], 10.00th=[ 617], 20.00th=[ 936], 00:21:35.488 | 30.00th=[ 1003], 40.00th=[ 1167], 
50.00th=[ 1653], 60.00th=[ 7349], 00:21:35.488 | 70.00th=[ 7886], 80.00th=[ 8288], 90.00th=[ 8557], 95.00th=[ 8658], 00:21:35.488 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:21:35.488 | 99.99th=[ 8658] 00:21:35.488 bw ( KiB/s): min= 4096, max=120832, per=1.05%, avg=43155.12, stdev=36356.93, samples=8 00:21:35.488 iops : min= 4, max= 118, avg=42.12, stdev=35.50, samples=8 00:21:35.488 lat (msec) : 100=0.34%, 250=3.37%, 500=3.70%, 750=4.04%, 1000=17.85% 00:21:35.488 lat (msec) : 2000=25.59%, >=2000=45.12% 00:21:35.488 cpu : usr=0.03%, sys=1.06%, ctx=599, majf=0, minf=32769 00:21:35.488 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.8% 00:21:35.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.488 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:21:35.488 issued rwts: total=297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.488 job2: (groupid=0, jobs=1): err= 0: pid=2734955: Thu Jul 25 07:27:06 2024 00:21:35.488 read: IOPS=13, BW=13.8MiB/s (14.4MB/s)(141MiB/10233msec) 00:21:35.488 slat (usec): min=462, max=2161.1k, avg=70923.04, stdev=328571.58 00:21:35.488 clat (msec): min=231, max=10169, avg=3869.72, stdev=4062.40 00:21:35.488 lat (msec): min=233, max=10179, avg=3940.64, stdev=4085.62 00:21:35.488 clat percentiles (msec): 00:21:35.488 | 1.00th=[ 234], 5.00th=[ 351], 10.00th=[ 498], 20.00th=[ 709], 00:21:35.488 | 30.00th=[ 953], 40.00th=[ 1250], 50.00th=[ 1519], 60.00th=[ 1787], 00:21:35.488 | 70.00th=[ 6409], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10000], 00:21:35.488 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:21:35.488 | 99.99th=[10134] 00:21:35.488 bw ( KiB/s): min=28672, max=28672, per=0.70%, avg=28672.00, stdev= 0.00, samples=1 00:21:35.488 iops : min= 28, max= 28, avg=28.00, stdev= 0.00, samples=1 00:21:35.488 lat (msec) : 250=2.84%, 500=7.80%, 750=12.06%, 1000=8.51%, 2000=34.04% 00:21:35.488 lat (msec) : >=2000=34.75% 00:21:35.488 cpu : usr=0.00%, sys=0.70%, ctx=380, majf=0, minf=32769 00:21:35.488 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.7%, 16=11.3%, 32=22.7%, >=64=55.3% 00:21:35.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.488 complete : 0=0.0%, 4=93.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=6.7% 00:21:35.488 issued rwts: total=141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.488 job2: (groupid=0, jobs=1): err= 0: pid=2734956: Thu Jul 25 07:27:06 2024 00:21:35.488 read: IOPS=28, BW=28.2MiB/s (29.6MB/s)(289MiB/10231msec) 00:21:35.488 slat (usec): min=428, max=2118.2k, avg=35334.78, stdev=196882.35 00:21:35.488 clat (msec): min=16, max=8390, avg=4204.38, stdev=2714.22 00:21:35.488 lat (msec): min=1229, max=8412, avg=4239.72, stdev=2710.50 00:21:35.488 clat percentiles (msec): 00:21:35.488 | 1.00th=[ 1234], 5.00th=[ 1318], 10.00th=[ 1385], 20.00th=[ 1586], 00:21:35.488 | 30.00th=[ 1888], 40.00th=[ 2089], 50.00th=[ 2400], 60.00th=[ 6477], 00:21:35.488 | 70.00th=[ 6745], 80.00th=[ 7349], 90.00th=[ 7886], 95.00th=[ 8154], 00:21:35.488 | 99.00th=[ 8356], 99.50th=[ 8356], 99.90th=[ 8423], 99.95th=[ 8423], 00:21:35.488 | 99.99th=[ 8423] 00:21:35.488 bw ( KiB/s): min= 4096, max=59392, per=0.89%, avg=36636.44, stdev=20259.79, samples=9 00:21:35.488 iops : min= 4, max= 58, avg=35.78, stdev=19.78, samples=9 00:21:35.488 lat (msec) : 20=0.35%, 2000=34.95%, 
>=2000=64.71% 00:21:35.488 cpu : usr=0.04%, sys=1.04%, ctx=881, majf=0, minf=32769 00:21:35.488 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.5%, 32=11.1%, >=64=78.2% 00:21:35.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.488 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:21:35.488 issued rwts: total=289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.488 job3: (groupid=0, jobs=1): err= 0: pid=2734957: Thu Jul 25 07:27:06 2024 00:21:35.488 read: IOPS=47, BW=48.0MiB/s (50.3MB/s)(483MiB/10067msec) 00:21:35.489 slat (usec): min=42, max=2143.9k, avg=20720.20, stdev=136527.40 00:21:35.489 clat (msec): min=56, max=6169, avg=2483.69, stdev=2126.10 00:21:35.489 lat (msec): min=103, max=6178, avg=2504.41, stdev=2131.12 00:21:35.489 clat percentiles (msec): 00:21:35.489 | 1.00th=[ 192], 5.00th=[ 617], 10.00th=[ 860], 20.00th=[ 1150], 00:21:35.489 | 30.00th=[ 1267], 40.00th=[ 1334], 50.00th=[ 1385], 60.00th=[ 1519], 00:21:35.489 | 70.00th=[ 1569], 80.00th=[ 6007], 90.00th=[ 6074], 95.00th=[ 6074], 00:21:35.489 | 99.00th=[ 6141], 99.50th=[ 6141], 99.90th=[ 6141], 99.95th=[ 6141], 00:21:35.489 | 99.99th=[ 6141] 00:21:35.489 bw ( KiB/s): min= 4096, max=133120, per=1.48%, avg=60586.67, stdev=43711.15, samples=12 00:21:35.489 iops : min= 4, max= 130, avg=59.17, stdev=42.69, samples=12 00:21:35.489 lat (msec) : 100=0.21%, 250=1.45%, 500=2.28%, 750=4.97%, 1000=6.21% 00:21:35.489 lat (msec) : 2000=57.97%, >=2000=26.92% 00:21:35.489 cpu : usr=0.01%, sys=1.14%, ctx=1264, majf=0, minf=32769 00:21:35.489 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.6%, >=64=87.0% 00:21:35.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.489 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.489 issued rwts: total=483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.489 job3: (groupid=0, jobs=1): err= 0: pid=2734958: Thu Jul 25 07:27:06 2024 00:21:35.489 read: IOPS=120, BW=121MiB/s (127MB/s)(1218MiB/10076msec) 00:21:35.489 slat (usec): min=44, max=69575, avg=8205.66, stdev=11840.07 00:21:35.489 clat (msec): min=71, max=3378, avg=943.25, stdev=826.41 00:21:35.489 lat (msec): min=88, max=3394, avg=951.45, stdev=832.50 00:21:35.489 clat percentiles (msec): 00:21:35.489 | 1.00th=[ 112], 5.00th=[ 338], 10.00th=[ 472], 20.00th=[ 477], 00:21:35.489 | 30.00th=[ 493], 40.00th=[ 550], 50.00th=[ 609], 60.00th=[ 659], 00:21:35.489 | 70.00th=[ 718], 80.00th=[ 1116], 90.00th=[ 2635], 95.00th=[ 3138], 00:21:35.489 | 99.00th=[ 3272], 99.50th=[ 3306], 99.90th=[ 3373], 99.95th=[ 3373], 00:21:35.489 | 99.99th=[ 3373] 00:21:35.489 bw ( KiB/s): min= 8192, max=274432, per=3.41%, avg=139626.75, stdev=96178.33, samples=16 00:21:35.489 iops : min= 8, max= 268, avg=136.31, stdev=93.91, samples=16 00:21:35.489 lat (msec) : 100=0.41%, 250=2.71%, 500=29.56%, 750=39.82%, 1000=4.27% 00:21:35.489 lat (msec) : 2000=10.10%, >=2000=13.14% 00:21:35.489 cpu : usr=0.07%, sys=2.58%, ctx=2112, majf=0, minf=32769 00:21:35.489 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:21:35.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.489 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.489 issued rwts: total=1218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.489 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:21:35.489 job3: (groupid=0, jobs=1): err= 0: pid=2734959: Thu Jul 25 07:27:06 2024 00:21:35.489 read: IOPS=8, BW=9136KiB/s (9355kB/s)(91.0MiB/10200msec) 00:21:35.489 slat (usec): min=376, max=2104.6k, avg=111101.08, stdev=431977.87 00:21:35.489 clat (msec): min=88, max=10134, avg=7163.32, stdev=3069.84 00:21:35.489 lat (msec): min=2120, max=10198, avg=7274.42, stdev=2992.94 00:21:35.489 clat percentiles (msec): 00:21:35.489 | 1.00th=[ 89], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4279], 00:21:35.489 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 8658], 60.00th=[ 9597], 00:21:35.489 | 70.00th=[ 9731], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10134], 00:21:35.489 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:21:35.489 | 99.99th=[10134] 00:21:35.489 lat (msec) : 100=1.10%, >=2000=98.90% 00:21:35.489 cpu : usr=0.00%, sys=0.64%, ctx=142, majf=0, minf=23297 00:21:35.489 IO depths : 1=1.1%, 2=2.2%, 4=4.4%, 8=8.8%, 16=17.6%, 32=35.2%, >=64=30.8% 00:21:35.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.489 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:35.489 issued rwts: total=91,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.489 job3: (groupid=0, jobs=1): err= 0: pid=2734960: Thu Jul 25 07:27:06 2024 00:21:35.489 read: IOPS=34, BW=34.0MiB/s (35.7MB/s)(342MiB/10053msec) 00:21:35.489 slat (usec): min=53, max=2139.2k, avg=29247.30, stdev=160739.11 00:21:35.489 clat (msec): min=48, max=7386, avg=3364.95, stdev=1667.97 00:21:35.489 lat (msec): min=56, max=7402, avg=3394.20, stdev=1676.99 00:21:35.489 clat percentiles (msec): 00:21:35.489 | 1.00th=[ 66], 5.00th=[ 359], 10.00th=[ 785], 20.00th=[ 1502], 00:21:35.489 | 30.00th=[ 3104], 40.00th=[ 3339], 50.00th=[ 3540], 60.00th=[ 3876], 00:21:35.489 | 70.00th=[ 4396], 80.00th=[ 5269], 90.00th=[ 5403], 95.00th=[ 5470], 00:21:35.489 | 99.00th=[ 5604], 99.50th=[ 7416], 99.90th=[ 7416], 99.95th=[ 7416], 00:21:35.489 | 99.99th=[ 7416] 00:21:35.489 bw ( KiB/s): min=30720, max=65536, per=1.19%, avg=48808.33, stdev=12006.72, samples=9 00:21:35.489 iops : min= 30, max= 64, avg=47.56, stdev=11.70, samples=9 00:21:35.489 lat (msec) : 50=0.29%, 100=1.46%, 250=1.75%, 500=3.51%, 750=2.63% 00:21:35.489 lat (msec) : 1000=2.92%, 2000=15.20%, >=2000=72.22% 00:21:35.489 cpu : usr=0.02%, sys=0.97%, ctx=1083, majf=0, minf=32769 00:21:35.489 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.4%, >=64=81.6% 00:21:35.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.489 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:21:35.489 issued rwts: total=342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.489 job3: (groupid=0, jobs=1): err= 0: pid=2734961: Thu Jul 25 07:27:06 2024 00:21:35.489 read: IOPS=39, BW=39.1MiB/s (41.0MB/s)(393MiB/10063msec) 00:21:35.489 slat (usec): min=73, max=2106.2k, avg=25562.21, stdev=180425.69 00:21:35.489 clat (msec): min=14, max=8068, avg=3075.06, stdev=3083.69 00:21:35.489 lat (msec): min=64, max=8072, avg=3100.62, stdev=3091.81 00:21:35.489 clat percentiles (msec): 00:21:35.489 | 1.00th=[ 83], 5.00th=[ 180], 10.00th=[ 284], 20.00th=[ 558], 00:21:35.489 | 30.00th=[ 986], 40.00th=[ 1028], 50.00th=[ 1150], 60.00th=[ 1586], 00:21:35.489 | 70.00th=[ 5201], 80.00th=[ 7483], 90.00th=[ 7953], 95.00th=[ 8020], 00:21:35.489 | 
99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8087], 99.95th=[ 8087], 00:21:35.489 | 99.99th=[ 8087] 00:21:35.489 bw ( KiB/s): min=20480, max=145730, per=1.63%, avg=66884.38, stdev=45207.92, samples=8 00:21:35.489 iops : min= 20, max= 142, avg=65.25, stdev=44.04, samples=8 00:21:35.489 lat (msec) : 20=0.25%, 100=3.56%, 250=4.33%, 500=10.94%, 750=7.89% 00:21:35.489 lat (msec) : 1000=6.87%, 2000=28.50%, >=2000=37.66% 00:21:35.489 cpu : usr=0.00%, sys=1.27%, ctx=605, majf=0, minf=32769 00:21:35.489 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.0% 00:21:35.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.489 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:35.489 issued rwts: total=393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.489 job3: (groupid=0, jobs=1): err= 0: pid=2734962: Thu Jul 25 07:27:06 2024 00:21:35.489 read: IOPS=36, BW=36.9MiB/s (38.7MB/s)(372MiB/10074msec) 00:21:35.489 slat (usec): min=452, max=2099.7k, avg=26909.50, stdev=152449.85 00:21:35.489 clat (msec): min=61, max=6241, avg=3042.25, stdev=2169.93 00:21:35.489 lat (msec): min=73, max=6254, avg=3069.16, stdev=2173.97 00:21:35.489 clat percentiles (msec): 00:21:35.489 | 1.00th=[ 84], 5.00th=[ 309], 10.00th=[ 709], 20.00th=[ 1401], 00:21:35.489 | 30.00th=[ 1871], 40.00th=[ 1938], 50.00th=[ 2056], 60.00th=[ 2140], 00:21:35.489 | 70.00th=[ 6007], 80.00th=[ 6074], 90.00th=[ 6141], 95.00th=[ 6141], 00:21:35.489 | 99.00th=[ 6208], 99.50th=[ 6208], 99.90th=[ 6275], 99.95th=[ 6275], 00:21:35.489 | 99.99th=[ 6275] 00:21:35.489 bw ( KiB/s): min=47104, max=88064, per=1.52%, avg=62422.75, stdev=12604.55, samples=8 00:21:35.489 iops : min= 46, max= 86, avg=60.88, stdev=12.37, samples=8 00:21:35.489 lat (msec) : 100=2.15%, 250=1.88%, 500=2.96%, 750=3.76%, 1000=4.57% 00:21:35.489 lat (msec) : 2000=28.76%, >=2000=55.91% 00:21:35.489 cpu : usr=0.03%, sys=0.98%, ctx=1355, majf=0, minf=32769 00:21:35.489 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.6%, >=64=83.1% 00:21:35.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.489 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:35.489 issued rwts: total=372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.489 job3: (groupid=0, jobs=1): err= 0: pid=2734963: Thu Jul 25 07:27:06 2024 00:21:35.489 read: IOPS=42, BW=43.0MiB/s (45.1MB/s)(432MiB/10048msec) 00:21:35.489 slat (usec): min=110, max=1967.5k, avg=23157.93, stdev=104092.63 00:21:35.489 clat (msec): min=41, max=5128, avg=2627.50, stdev=1525.03 00:21:35.489 lat (msec): min=52, max=5135, avg=2650.66, stdev=1527.98 00:21:35.489 clat percentiles (msec): 00:21:35.489 | 1.00th=[ 70], 5.00th=[ 321], 10.00th=[ 810], 20.00th=[ 1653], 00:21:35.489 | 30.00th=[ 1854], 40.00th=[ 2005], 50.00th=[ 2106], 60.00th=[ 2232], 00:21:35.489 | 70.00th=[ 3104], 80.00th=[ 4866], 90.00th=[ 5000], 95.00th=[ 5067], 00:21:35.489 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5134], 99.95th=[ 5134], 00:21:35.489 | 99.99th=[ 5134] 00:21:35.489 bw ( KiB/s): min=14336, max=98304, per=1.26%, avg=51753.92, stdev=22524.94, samples=12 00:21:35.489 iops : min= 14, max= 96, avg=50.50, stdev=21.98, samples=12 00:21:35.489 lat (msec) : 50=0.23%, 100=2.08%, 250=2.08%, 500=2.31%, 750=2.55% 00:21:35.489 lat (msec) : 1000=3.94%, 2000=27.31%, >=2000=59.49% 00:21:35.489 cpu : usr=0.01%, 
sys=1.14%, ctx=1475, majf=0, minf=32769 00:21:35.489 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.4%, >=64=85.4% 00:21:35.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.489 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.489 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.489 job3: (groupid=0, jobs=1): err= 0: pid=2734964: Thu Jul 25 07:27:06 2024 00:21:35.489 read: IOPS=34, BW=34.8MiB/s (36.5MB/s)(350MiB/10066msec) 00:21:35.489 slat (usec): min=37, max=2151.7k, avg=28624.82, stdev=177757.73 00:21:35.489 clat (msec): min=45, max=7209, avg=1913.35, stdev=1754.25 00:21:35.489 lat (msec): min=67, max=7219, avg=1941.98, stdev=1773.95 00:21:35.489 clat percentiles (msec): 00:21:35.490 | 1.00th=[ 120], 5.00th=[ 355], 10.00th=[ 625], 20.00th=[ 1053], 00:21:35.490 | 30.00th=[ 1133], 40.00th=[ 1301], 50.00th=[ 1368], 60.00th=[ 1536], 00:21:35.490 | 70.00th=[ 1804], 80.00th=[ 2072], 90.00th=[ 3608], 95.00th=[ 7148], 00:21:35.490 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7215], 99.95th=[ 7215], 00:21:35.490 | 99.99th=[ 7215] 00:21:35.490 bw ( KiB/s): min= 4096, max=137216, per=1.58%, avg=64933.86, stdev=41246.97, samples=7 00:21:35.490 iops : min= 4, max= 134, avg=63.29, stdev=40.30, samples=7 00:21:35.490 lat (msec) : 50=0.29%, 100=0.29%, 250=2.86%, 500=4.29%, 750=4.00% 00:21:35.490 lat (msec) : 1000=3.71%, 2000=61.43%, >=2000=23.14% 00:21:35.490 cpu : usr=0.00%, sys=0.92%, ctx=1072, majf=0, minf=32769 00:21:35.490 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.1%, >=64=82.0% 00:21:35.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.490 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:35.490 issued rwts: total=350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.490 job3: (groupid=0, jobs=1): err= 0: pid=2734965: Thu Jul 25 07:27:06 2024 00:21:35.490 read: IOPS=76, BW=76.5MiB/s (80.2MB/s)(772MiB/10089msec) 00:21:35.490 slat (usec): min=46, max=2024.3k, avg=12972.16, stdev=91446.19 00:21:35.490 clat (msec): min=70, max=5470, avg=1593.44, stdev=1512.24 00:21:35.490 lat (msec): min=98, max=5523, avg=1606.42, stdev=1516.49 00:21:35.490 clat percentiles (msec): 00:21:35.490 | 1.00th=[ 257], 5.00th=[ 550], 10.00th=[ 584], 20.00th=[ 625], 00:21:35.490 | 30.00th=[ 651], 40.00th=[ 768], 50.00th=[ 877], 60.00th=[ 961], 00:21:35.490 | 70.00th=[ 1603], 80.00th=[ 1804], 90.00th=[ 4732], 95.00th=[ 5336], 00:21:35.490 | 99.00th=[ 5470], 99.50th=[ 5470], 99.90th=[ 5470], 99.95th=[ 5470], 00:21:35.490 | 99.99th=[ 5470] 00:21:35.490 bw ( KiB/s): min=16384, max=227328, per=2.30%, avg=94270.29, stdev=69152.35, samples=14 00:21:35.490 iops : min= 16, max= 222, avg=92.00, stdev=67.50, samples=14 00:21:35.490 lat (msec) : 100=0.26%, 250=0.65%, 500=1.81%, 750=36.14%, 1000=21.89% 00:21:35.490 lat (msec) : 2000=22.80%, >=2000=16.45% 00:21:35.490 cpu : usr=0.03%, sys=1.61%, ctx=1127, majf=0, minf=32769 00:21:35.490 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.8% 00:21:35.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.490 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.490 issued rwts: total=772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.490 latency : target=0, window=0, percentile=100.00%, depth=128 
00:21:35.490 job3: (groupid=0, jobs=1): err= 0: pid=2734966: Thu Jul 25 07:27:06 2024 00:21:35.490 read: IOPS=36, BW=36.1MiB/s (37.8MB/s)(363MiB/10063msec) 00:21:35.490 slat (usec): min=51, max=2116.7k, avg=27582.69, stdev=166102.58 00:21:35.490 clat (msec): min=48, max=7276, avg=3336.16, stdev=2529.36 00:21:35.490 lat (msec): min=74, max=7304, avg=3363.74, stdev=2534.96 00:21:35.490 clat percentiles (msec): 00:21:35.490 | 1.00th=[ 96], 5.00th=[ 305], 10.00th=[ 676], 20.00th=[ 1183], 00:21:35.490 | 30.00th=[ 1284], 40.00th=[ 1586], 50.00th=[ 3004], 60.00th=[ 3104], 00:21:35.490 | 70.00th=[ 5604], 80.00th=[ 6074], 90.00th=[ 7215], 95.00th=[ 7215], 00:21:35.490 | 99.00th=[ 7282], 99.50th=[ 7282], 99.90th=[ 7282], 99.95th=[ 7282], 00:21:35.490 | 99.99th=[ 7282] 00:21:35.490 bw ( KiB/s): min= 4096, max=75776, per=1.17%, avg=48105.80, stdev=26059.91, samples=10 00:21:35.490 iops : min= 4, max= 74, avg=46.80, stdev=25.40, samples=10 00:21:35.490 lat (msec) : 50=0.28%, 100=0.83%, 250=2.75%, 500=3.86%, 750=3.03% 00:21:35.490 lat (msec) : 1000=3.03%, 2000=35.81%, >=2000=50.41% 00:21:35.490 cpu : usr=0.03%, sys=1.49%, ctx=931, majf=0, minf=32769 00:21:35.490 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.6% 00:21:35.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.490 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:35.490 issued rwts: total=363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.490 job3: (groupid=0, jobs=1): err= 0: pid=2734967: Thu Jul 25 07:27:06 2024 00:21:35.490 read: IOPS=60, BW=60.7MiB/s (63.6MB/s)(610MiB/10051msec) 00:21:35.490 slat (usec): min=523, max=210794, avg=16392.03, stdev=16727.97 00:21:35.490 clat (msec): min=47, max=3216, avg=1833.72, stdev=754.44 00:21:35.490 lat (msec): min=51, max=3242, avg=1850.11, stdev=757.02 00:21:35.490 clat percentiles (msec): 00:21:35.490 | 1.00th=[ 78], 5.00th=[ 439], 10.00th=[ 1020], 20.00th=[ 1267], 00:21:35.490 | 30.00th=[ 1469], 40.00th=[ 1586], 50.00th=[ 1754], 60.00th=[ 1972], 00:21:35.490 | 70.00th=[ 2232], 80.00th=[ 2567], 90.00th=[ 2970], 95.00th=[ 3071], 00:21:35.490 | 99.00th=[ 3171], 99.50th=[ 3205], 99.90th=[ 3205], 99.95th=[ 3205], 00:21:35.490 | 99.99th=[ 3205] 00:21:35.490 bw ( KiB/s): min=28672, max=129024, per=1.61%, avg=65884.07, stdev=27571.29, samples=15 00:21:35.490 iops : min= 28, max= 126, avg=64.33, stdev=26.92, samples=15 00:21:35.490 lat (msec) : 50=0.16%, 100=1.15%, 250=2.13%, 500=2.13%, 750=2.62% 00:21:35.490 lat (msec) : 1000=1.64%, 2000=52.30%, >=2000=37.87% 00:21:35.490 cpu : usr=0.02%, sys=1.69%, ctx=2225, majf=0, minf=32769 00:21:35.490 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7% 00:21:35.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.490 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.490 issued rwts: total=610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.490 job3: (groupid=0, jobs=1): err= 0: pid=2734968: Thu Jul 25 07:27:06 2024 00:21:35.490 read: IOPS=32, BW=32.4MiB/s (34.0MB/s)(326MiB/10052msec) 00:21:35.490 slat (usec): min=454, max=2179.0k, avg=30687.26, stdev=168167.82 00:21:35.490 clat (msec): min=45, max=6617, avg=3635.19, stdev=2333.42 00:21:35.490 lat (msec): min=60, max=6628, avg=3665.88, stdev=2333.88 00:21:35.490 clat percentiles (msec): 00:21:35.490 | 1.00th=[ 
75], 5.00th=[ 464], 10.00th=[ 869], 20.00th=[ 1636], 00:21:35.490 | 30.00th=[ 2165], 40.00th=[ 2299], 50.00th=[ 2433], 60.00th=[ 2668], 00:21:35.490 | 70.00th=[ 6342], 80.00th=[ 6544], 90.00th=[ 6611], 95.00th=[ 6611], 00:21:35.490 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611], 00:21:35.490 | 99.99th=[ 6611] 00:21:35.490 bw ( KiB/s): min= 4096, max=83968, per=0.99%, avg=40747.00, stdev=23989.62, samples=10 00:21:35.490 iops : min= 4, max= 82, avg=39.70, stdev=23.43, samples=10 00:21:35.490 lat (msec) : 50=0.31%, 100=0.92%, 250=1.53%, 500=3.07%, 750=3.07% 00:21:35.490 lat (msec) : 1000=2.45%, 2000=12.58%, >=2000=76.07% 00:21:35.490 cpu : usr=0.01%, sys=1.18%, ctx=1406, majf=0, minf=32769 00:21:35.490 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=4.9%, 32=9.8%, >=64=80.7% 00:21:35.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.490 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:21:35.490 issued rwts: total=326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.490 job3: (groupid=0, jobs=1): err= 0: pid=2734969: Thu Jul 25 07:27:06 2024 00:21:35.490 read: IOPS=70, BW=70.5MiB/s (73.9MB/s)(711MiB/10092msec) 00:21:35.490 slat (usec): min=488, max=2023.5k, avg=14091.28, stdev=76510.41 00:21:35.490 clat (msec): min=69, max=4307, avg=1637.88, stdev=1046.37 00:21:35.490 lat (msec): min=92, max=4309, avg=1651.97, stdev=1048.98 00:21:35.490 clat percentiles (msec): 00:21:35.490 | 1.00th=[ 326], 5.00th=[ 827], 10.00th=[ 844], 20.00th=[ 885], 00:21:35.490 | 30.00th=[ 944], 40.00th=[ 1070], 50.00th=[ 1116], 60.00th=[ 1301], 00:21:35.490 | 70.00th=[ 1871], 80.00th=[ 2232], 90.00th=[ 3540], 95.00th=[ 4111], 00:21:35.490 | 99.00th=[ 4279], 99.50th=[ 4329], 99.90th=[ 4329], 99.95th=[ 4329], 00:21:35.490 | 99.99th=[ 4329] 00:21:35.490 bw ( KiB/s): min= 2048, max=161792, per=2.24%, avg=91835.77, stdev=51626.72, samples=13 00:21:35.490 iops : min= 2, max= 158, avg=89.62, stdev=50.46, samples=13 00:21:35.490 lat (msec) : 100=0.28%, 250=0.56%, 500=1.13%, 750=1.41%, 1000=31.50% 00:21:35.490 lat (msec) : 2000=40.08%, >=2000=25.04% 00:21:35.490 cpu : usr=0.02%, sys=1.53%, ctx=1756, majf=0, minf=32769 00:21:35.490 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:21:35.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.490 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.490 issued rwts: total=711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.490 job4: (groupid=0, jobs=1): err= 0: pid=2734970: Thu Jul 25 07:27:06 2024 00:21:35.490 read: IOPS=3, BW=3884KiB/s (3977kB/s)(39.0MiB/10283msec) 00:21:35.490 slat (usec): min=1392, max=2091.3k, avg=261406.02, stdev=666350.77 00:21:35.490 clat (msec): min=87, max=10277, avg=6914.84, stdev=3444.25 00:21:35.490 lat (msec): min=2129, max=10282, avg=7176.25, stdev=3296.11 00:21:35.490 clat percentiles (msec): 00:21:35.490 | 1.00th=[ 88], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 2198], 00:21:35.490 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[10134], 00:21:35.490 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:21:35.490 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:35.490 | 99.99th=[10268] 00:21:35.490 lat (msec) : 100=2.56%, >=2000=97.44% 00:21:35.490 cpu : usr=0.00%, sys=0.33%, ctx=96, 
majf=0, minf=9985 00:21:35.490 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:21:35.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.490 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:35.490 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.490 job4: (groupid=0, jobs=1): err= 0: pid=2734971: Thu Jul 25 07:27:06 2024 00:21:35.490 read: IOPS=0, BW=1003KiB/s (1027kB/s)(10.0MiB/10207msec) 00:21:35.490 slat (msec): min=15, max=2153, avg=1012.38, stdev=1037.90 00:21:35.490 clat (msec): min=82, max=10157, avg=6258.83, stdev=3822.51 00:21:35.490 lat (msec): min=2104, max=10206, avg=7271.21, stdev=3311.52 00:21:35.490 clat percentiles (msec): 00:21:35.490 | 1.00th=[ 83], 5.00th=[ 83], 10.00th=[ 83], 20.00th=[ 2106], 00:21:35.490 | 30.00th=[ 2198], 40.00th=[ 4279], 50.00th=[ 6409], 60.00th=[ 8557], 00:21:35.490 | 70.00th=[ 8557], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:21:35.491 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:21:35.491 | 99.99th=[10134] 00:21:35.491 lat (msec) : 100=10.00%, >=2000=90.00% 00:21:35.491 cpu : usr=0.00%, sys=0.07%, ctx=60, majf=0, minf=2561 00:21:35.491 IO depths : 1=10.0%, 2=20.0%, 4=40.0%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:35.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.491 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.491 issued rwts: total=10,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.491 job4: (groupid=0, jobs=1): err= 0: pid=2734972: Thu Jul 25 07:27:06 2024 00:21:35.491 read: IOPS=1, BW=1902KiB/s (1948kB/s)(19.0MiB/10229msec) 00:21:35.491 slat (msec): min=4, max=2119, avg=533.77, stdev=890.57 00:21:35.491 clat (msec): min=87, max=10168, avg=5041.30, stdev=3173.56 00:21:35.491 lat (msec): min=2122, max=10228, avg=5575.08, stdev=3146.78 00:21:35.491 clat percentiles (msec): 00:21:35.491 | 1.00th=[ 88], 5.00th=[ 88], 10.00th=[ 2123], 20.00th=[ 2165], 00:21:35.491 | 30.00th=[ 2198], 40.00th=[ 2198], 50.00th=[ 4329], 60.00th=[ 6477], 00:21:35.491 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[10134], 95.00th=[10134], 00:21:35.491 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:21:35.491 | 99.99th=[10134] 00:21:35.491 lat (msec) : 100=5.26%, >=2000=94.74% 00:21:35.491 cpu : usr=0.00%, sys=0.13%, ctx=76, majf=0, minf=4865 00:21:35.491 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0% 00:21:35.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.491 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:21:35.491 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.491 job4: (groupid=0, jobs=1): err= 0: pid=2734973: Thu Jul 25 07:27:06 2024 00:21:35.491 read: IOPS=54, BW=54.4MiB/s (57.0MB/s)(546MiB/10043msec) 00:21:35.491 slat (usec): min=388, max=2102.9k, avg=18313.96, stdev=141637.04 00:21:35.491 clat (msec): min=41, max=6804, avg=1112.38, stdev=1153.59 00:21:35.491 lat (msec): min=43, max=6827, avg=1130.69, stdev=1178.38 00:21:35.491 clat percentiles (msec): 00:21:35.491 | 1.00th=[ 62], 5.00th=[ 133], 10.00th=[ 347], 20.00th=[ 785], 00:21:35.491 | 30.00th=[ 936], 40.00th=[ 953], 50.00th=[ 969], 
60.00th=[ 995], 00:21:35.491 | 70.00th=[ 1036], 80.00th=[ 1133], 90.00th=[ 1150], 95.00th=[ 1167], 00:21:35.491 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:21:35.491 | 99.99th=[ 6812] 00:21:35.491 bw ( KiB/s): min=77824, max=156634, per=2.97%, avg=121558.00, stdev=25820.52, samples=7 00:21:35.491 iops : min= 76, max= 152, avg=118.57, stdev=25.00, samples=7 00:21:35.491 lat (msec) : 50=0.55%, 100=3.11%, 250=4.21%, 500=6.23%, 750=5.13% 00:21:35.491 lat (msec) : 1000=41.39%, 2000=34.98%, >=2000=4.40% 00:21:35.491 cpu : usr=0.04%, sys=0.98%, ctx=1082, majf=0, minf=32769 00:21:35.491 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.5% 00:21:35.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.491 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.491 issued rwts: total=546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.491 job4: (groupid=0, jobs=1): err= 0: pid=2734974: Thu Jul 25 07:27:06 2024 00:21:35.491 read: IOPS=35, BW=35.5MiB/s (37.2MB/s)(358MiB/10082msec) 00:21:35.491 slat (usec): min=42, max=2142.9k, avg=27937.42, stdev=204933.19 00:21:35.491 clat (msec): min=77, max=8651, avg=1405.98, stdev=2213.07 00:21:35.491 lat (msec): min=89, max=8654, avg=1433.91, stdev=2245.09 00:21:35.491 clat percentiles (msec): 00:21:35.491 | 1.00th=[ 92], 5.00th=[ 186], 10.00th=[ 288], 20.00th=[ 531], 00:21:35.491 | 30.00th=[ 743], 40.00th=[ 768], 50.00th=[ 776], 60.00th=[ 776], 00:21:35.491 | 70.00th=[ 802], 80.00th=[ 810], 90.00th=[ 2869], 95.00th=[ 8557], 00:21:35.491 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:21:35.491 | 99.99th=[ 8658] 00:21:35.491 bw ( KiB/s): min=141312, max=165291, per=3.83%, avg=156814.33, stdev=13445.00, samples=3 00:21:35.491 iops : min= 138, max= 161, avg=153.00, stdev=13.00, samples=3 00:21:35.491 lat (msec) : 100=1.68%, 250=5.59%, 500=12.01%, 750=11.17%, 1000=57.82% 00:21:35.491 lat (msec) : >=2000=11.73% 00:21:35.491 cpu : usr=0.00%, sys=1.04%, ctx=348, majf=0, minf=32769 00:21:35.491 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=8.9%, >=64=82.4% 00:21:35.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.491 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:35.491 issued rwts: total=358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.491 job4: (groupid=0, jobs=1): err= 0: pid=2734975: Thu Jul 25 07:27:06 2024 00:21:35.491 read: IOPS=242, BW=243MiB/s (255MB/s)(2434MiB/10022msec) 00:21:35.491 slat (usec): min=43, max=1010.7k, avg=4104.86, stdev=22162.80 00:21:35.491 clat (msec): min=18, max=2460, avg=497.04, stdev=341.88 00:21:35.491 lat (msec): min=21, max=3471, avg=501.14, stdev=346.02 00:21:35.491 clat percentiles (msec): 00:21:35.491 | 1.00th=[ 69], 5.00th=[ 249], 10.00th=[ 251], 20.00th=[ 253], 00:21:35.491 | 30.00th=[ 255], 40.00th=[ 351], 50.00th=[ 376], 60.00th=[ 477], 00:21:35.491 | 70.00th=[ 625], 80.00th=[ 709], 90.00th=[ 776], 95.00th=[ 1636], 00:21:35.491 | 99.00th=[ 1670], 99.50th=[ 1670], 99.90th=[ 1687], 99.95th=[ 2433], 00:21:35.491 | 99.99th=[ 2467] 00:21:35.491 bw ( KiB/s): min=28672, max=518144, per=6.40%, avg=262205.94, stdev=151003.74, samples=18 00:21:35.491 iops : min= 28, max= 506, avg=256.06, stdev=147.46, samples=18 00:21:35.491 lat (msec) : 20=0.04%, 50=0.53%, 100=1.07%, 250=7.07%, 500=52.38% 
00:21:35.491 lat (msec) : 750=26.79%, 1000=6.82%, 2000=5.22%, >=2000=0.08% 00:21:35.491 cpu : usr=0.05%, sys=3.00%, ctx=2348, majf=0, minf=32769 00:21:35.491 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:21:35.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.491 issued rwts: total=2434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.491 job4: (groupid=0, jobs=1): err= 0: pid=2734976: Thu Jul 25 07:27:06 2024 00:21:35.491 read: IOPS=49, BW=49.8MiB/s (52.2MB/s)(500MiB/10045msec) 00:21:35.491 slat (usec): min=481, max=2111.7k, avg=19999.66, stdev=148396.21 00:21:35.491 clat (msec): min=43, max=7003, avg=1289.39, stdev=1427.63 00:21:35.491 lat (msec): min=47, max=7011, avg=1309.39, stdev=1449.25 00:21:35.491 clat percentiles (msec): 00:21:35.491 | 1.00th=[ 69], 5.00th=[ 251], 10.00th=[ 468], 20.00th=[ 835], 00:21:35.491 | 30.00th=[ 961], 40.00th=[ 1011], 50.00th=[ 1062], 60.00th=[ 1083], 00:21:35.491 | 70.00th=[ 1099], 80.00th=[ 1116], 90.00th=[ 1200], 95.00th=[ 6879], 00:21:35.491 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:21:35.491 | 99.99th=[ 7013] 00:21:35.491 bw ( KiB/s): min=32768, max=147161, per=2.66%, avg=109087.00, stdev=38040.42, samples=7 00:21:35.491 iops : min= 32, max= 143, avg=106.43, stdev=37.03, samples=7 00:21:35.491 lat (msec) : 50=0.60%, 100=0.60%, 250=3.60%, 500=5.80%, 750=7.00% 00:21:35.491 lat (msec) : 1000=18.80%, 2000=57.40%, >=2000=6.20% 00:21:35.491 cpu : usr=0.00%, sys=1.23%, ctx=1071, majf=0, minf=32769 00:21:35.491 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.4% 00:21:35.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.491 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.491 issued rwts: total=500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.491 job4: (groupid=0, jobs=1): err= 0: pid=2734977: Thu Jul 25 07:27:06 2024 00:21:35.491 read: IOPS=15, BW=15.7MiB/s (16.4MB/s)(160MiB/10204msec) 00:21:35.491 slat (usec): min=48, max=2129.0k, avg=63370.03, stdev=318856.90 00:21:35.491 clat (msec): min=63, max=8690, avg=2469.11, stdev=1978.89 00:21:35.491 lat (msec): min=820, max=8747, avg=2532.48, stdev=2027.61 00:21:35.491 clat percentiles (msec): 00:21:35.491 | 1.00th=[ 818], 5.00th=[ 1284], 10.00th=[ 1351], 20.00th=[ 1452], 00:21:35.491 | 30.00th=[ 1536], 40.00th=[ 1620], 50.00th=[ 1703], 60.00th=[ 1972], 00:21:35.491 | 70.00th=[ 2056], 80.00th=[ 2165], 90.00th=[ 5067], 95.00th=[ 8490], 00:21:35.491 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:21:35.491 | 99.99th=[ 8658] 00:21:35.491 bw ( KiB/s): min=65536, max=65536, per=1.60%, avg=65536.00, stdev= 0.00, samples=1 00:21:35.491 iops : min= 64, max= 64, avg=64.00, stdev= 0.00, samples=1 00:21:35.491 lat (msec) : 100=0.62%, 1000=3.75%, 2000=63.75%, >=2000=31.88% 00:21:35.491 cpu : usr=0.02%, sys=0.88%, ctx=157, majf=0, minf=32769 00:21:35.491 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=5.0%, 16=10.0%, 32=20.0%, >=64=60.6% 00:21:35.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.491 complete : 0=0.0%, 4=97.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.9% 00:21:35.491 issued rwts: total=160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.491 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:21:35.491 job4: (groupid=0, jobs=1): err= 0: pid=2734978: Thu Jul 25 07:27:06 2024 00:21:35.491 read: IOPS=34, BW=34.4MiB/s (36.1MB/s)(351MiB/10205msec) 00:21:35.491 slat (usec): min=62, max=2118.6k, avg=28819.29, stdev=190470.33 00:21:35.491 clat (msec): min=87, max=6621, avg=3489.75, stdev=1832.00 00:21:35.491 lat (msec): min=940, max=6630, avg=3518.57, stdev=1830.32 00:21:35.491 clat percentiles (msec): 00:21:35.491 | 1.00th=[ 936], 5.00th=[ 953], 10.00th=[ 961], 20.00th=[ 1028], 00:21:35.491 | 30.00th=[ 2140], 40.00th=[ 2467], 50.00th=[ 3742], 60.00th=[ 4044], 00:21:35.491 | 70.00th=[ 5201], 80.00th=[ 5269], 90.00th=[ 5403], 95.00th=[ 6544], 00:21:35.491 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611], 00:21:35.491 | 99.99th=[ 6611] 00:21:35.491 bw ( KiB/s): min=10240, max=126976, per=1.59%, avg=65243.43, stdev=49172.31, samples=7 00:21:35.491 iops : min= 10, max= 124, avg=63.71, stdev=48.02, samples=7 00:21:35.491 lat (msec) : 100=0.28%, 1000=19.37%, 2000=7.69%, >=2000=72.65% 00:21:35.491 cpu : usr=0.05%, sys=1.19%, ctx=726, majf=0, minf=32769 00:21:35.491 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.1%, >=64=82.1% 00:21:35.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.491 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:35.491 issued rwts: total=351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.492 job4: (groupid=0, jobs=1): err= 0: pid=2734979: Thu Jul 25 07:27:06 2024 00:21:35.492 read: IOPS=20, BW=20.3MiB/s (21.2MB/s)(208MiB/10271msec) 00:21:35.492 slat (usec): min=54, max=2118.6k, avg=48949.42, stdev=279693.23 00:21:35.492 clat (msec): min=88, max=9455, avg=5879.06, stdev=3493.28 00:21:35.492 lat (msec): min=1061, max=9464, avg=5928.01, stdev=3474.85 00:21:35.492 clat percentiles (msec): 00:21:35.492 | 1.00th=[ 1062], 5.00th=[ 1062], 10.00th=[ 1070], 20.00th=[ 1116], 00:21:35.492 | 30.00th=[ 2089], 40.00th=[ 5269], 50.00th=[ 7483], 60.00th=[ 8792], 00:21:35.492 | 70.00th=[ 8926], 80.00th=[ 9194], 90.00th=[ 9329], 95.00th=[ 9329], 00:21:35.492 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:21:35.492 | 99.99th=[ 9463] 00:21:35.492 bw ( KiB/s): min=12288, max=75776, per=0.80%, avg=32768.00, stdev=25082.77, samples=5 00:21:35.492 iops : min= 12, max= 74, avg=32.00, stdev=24.49, samples=5 00:21:35.492 lat (msec) : 100=0.48%, 2000=26.92%, >=2000=72.60% 00:21:35.492 cpu : usr=0.01%, sys=1.03%, ctx=443, majf=0, minf=32769 00:21:35.492 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.8%, 16=7.7%, 32=15.4%, >=64=69.7% 00:21:35.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.492 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:21:35.492 issued rwts: total=208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.492 job4: (groupid=0, jobs=1): err= 0: pid=2734980: Thu Jul 25 07:27:06 2024 00:21:35.492 read: IOPS=5, BW=5480KiB/s (5612kB/s)(55.0MiB/10277msec) 00:21:35.492 slat (usec): min=712, max=2099.2k, avg=185685.19, stdev=564808.07 00:21:35.492 clat (msec): min=63, max=10275, avg=7710.53, stdev=3501.98 00:21:35.492 lat (msec): min=1990, max=10276, avg=7896.21, stdev=3356.77 00:21:35.492 clat percentiles (msec): 00:21:35.492 | 1.00th=[ 64], 5.00th=[ 2005], 10.00th=[ 2022], 20.00th=[ 2089], 00:21:35.492 | 30.00th=[ 6409], 
40.00th=[ 8557], 50.00th=[10134], 60.00th=[10268], 00:21:35.492 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:21:35.492 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:35.492 | 99.99th=[10268] 00:21:35.492 lat (msec) : 100=1.82%, 2000=1.82%, >=2000=96.36% 00:21:35.492 cpu : usr=0.00%, sys=0.47%, ctx=113, majf=0, minf=14081 00:21:35.492 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, >=64=0.0% 00:21:35.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.492 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:35.492 issued rwts: total=55,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.492 job4: (groupid=0, jobs=1): err= 0: pid=2734981: Thu Jul 25 07:27:06 2024 00:21:35.492 read: IOPS=54, BW=54.7MiB/s (57.3MB/s)(556MiB/10173msec) 00:21:35.492 slat (usec): min=42, max=2114.8k, avg=18165.37, stdev=166132.16 00:21:35.492 clat (msec): min=70, max=6212, avg=1037.69, stdev=984.16 00:21:35.492 lat (msec): min=485, max=6435, avg=1055.86, stdev=1023.45 00:21:35.492 clat percentiles (msec): 00:21:35.492 | 1.00th=[ 485], 5.00th=[ 489], 10.00th=[ 489], 20.00th=[ 489], 00:21:35.492 | 30.00th=[ 493], 40.00th=[ 493], 50.00th=[ 498], 60.00th=[ 523], 00:21:35.492 | 70.00th=[ 531], 80.00th=[ 2265], 90.00th=[ 2500], 95.00th=[ 2635], 00:21:35.492 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 6208], 99.95th=[ 6208], 00:21:35.492 | 99.99th=[ 6208] 00:21:35.492 bw ( KiB/s): min=98304, max=270336, per=5.35%, avg=219136.00, stdev=80924.08, samples=4 00:21:35.492 iops : min= 96, max= 264, avg=214.00, stdev=79.03, samples=4 00:21:35.492 lat (msec) : 100=0.18%, 500=53.60%, 750=20.68%, >=2000=25.54% 00:21:35.492 cpu : usr=0.05%, sys=1.19%, ctx=481, majf=0, minf=32769 00:21:35.492 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.8%, >=64=88.7% 00:21:35.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.492 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.492 issued rwts: total=556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.492 job4: (groupid=0, jobs=1): err= 0: pid=2734982: Thu Jul 25 07:27:06 2024 00:21:35.492 read: IOPS=3, BW=3399KiB/s (3481kB/s)(34.0MiB/10242msec) 00:21:35.492 slat (usec): min=1611, max=2115.0k, avg=299344.23, stdev=694055.77 00:21:35.492 clat (msec): min=63, max=10222, avg=6584.98, stdev=3147.53 00:21:35.492 lat (msec): min=2074, max=10241, avg=6884.32, stdev=2988.49 00:21:35.492 clat percentiles (msec): 00:21:35.492 | 1.00th=[ 64], 5.00th=[ 2072], 10.00th=[ 4111], 20.00th=[ 4178], 00:21:35.492 | 30.00th=[ 4212], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 8557], 00:21:35.492 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10268], 00:21:35.492 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:35.492 | 99.99th=[10268] 00:21:35.492 lat (msec) : 100=2.94%, >=2000=97.06% 00:21:35.492 cpu : usr=0.00%, sys=0.22%, ctx=109, majf=0, minf=8705 00:21:35.492 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:21:35.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.492 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:35.492 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.492 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:21:35.492 job5: (groupid=0, jobs=1): err= 0: pid=2734984: Thu Jul 25 07:27:06 2024 00:21:35.492 read: IOPS=56, BW=56.8MiB/s (59.6MB/s)(571MiB/10054msec) 00:21:35.492 slat (usec): min=55, max=2082.0k, avg=17524.95, stdev=139083.87 00:21:35.492 clat (msec): min=43, max=7077, avg=976.35, stdev=1066.32 00:21:35.492 lat (msec): min=61, max=7079, avg=993.87, stdev=1095.64 00:21:35.492 clat percentiles (msec): 00:21:35.492 | 1.00th=[ 71], 5.00th=[ 472], 10.00th=[ 498], 20.00th=[ 502], 00:21:35.492 | 30.00th=[ 542], 40.00th=[ 584], 50.00th=[ 751], 60.00th=[ 927], 00:21:35.492 | 70.00th=[ 986], 80.00th=[ 1116], 90.00th=[ 1334], 95.00th=[ 1469], 00:21:35.492 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:21:35.492 | 99.99th=[ 7080] 00:21:35.492 bw ( KiB/s): min=63503, max=258048, per=3.69%, avg=151213.17, stdev=82064.29, samples=6 00:21:35.492 iops : min= 62, max= 252, avg=147.67, stdev=80.14, samples=6 00:21:35.492 lat (msec) : 50=0.18%, 100=1.05%, 250=2.10%, 500=13.84%, 750=32.75% 00:21:35.492 lat (msec) : 1000=21.54%, 2000=24.87%, >=2000=3.68% 00:21:35.492 cpu : usr=0.00%, sys=1.30%, ctx=1095, majf=0, minf=32769 00:21:35.492 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:21:35.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.492 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.492 issued rwts: total=571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.492 job5: (groupid=0, jobs=1): err= 0: pid=2734985: Thu Jul 25 07:27:06 2024 00:21:35.492 read: IOPS=135, BW=136MiB/s (143MB/s)(1365MiB/10038msec) 00:21:35.492 slat (usec): min=42, max=1938.0k, avg=7333.60, stdev=53479.26 00:21:35.492 clat (msec): min=18, max=2706, avg=855.12, stdev=637.77 00:21:35.492 lat (msec): min=54, max=2709, avg=862.45, stdev=640.55 00:21:35.492 clat percentiles (msec): 00:21:35.492 | 1.00th=[ 73], 5.00th=[ 359], 10.00th=[ 472], 20.00th=[ 477], 00:21:35.492 | 30.00th=[ 510], 40.00th=[ 584], 50.00th=[ 609], 60.00th=[ 667], 00:21:35.492 | 70.00th=[ 735], 80.00th=[ 1167], 90.00th=[ 1552], 95.00th=[ 2668], 00:21:35.492 | 99.00th=[ 2668], 99.50th=[ 2702], 99.90th=[ 2702], 99.95th=[ 2702], 00:21:35.492 | 99.99th=[ 2702] 00:21:35.492 bw ( KiB/s): min=63488, max=280576, per=4.33%, avg=177545.85, stdev=72448.20, samples=13 00:21:35.492 iops : min= 62, max= 274, avg=173.38, stdev=70.75, samples=13 00:21:35.492 lat (msec) : 20=0.07%, 100=1.17%, 250=2.27%, 500=19.71%, 750=48.06% 00:21:35.492 lat (msec) : 1000=5.20%, 2000=14.21%, >=2000=9.30% 00:21:35.492 cpu : usr=0.14%, sys=2.08%, ctx=1447, majf=0, minf=32769 00:21:35.492 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:21:35.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.492 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.492 issued rwts: total=1365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.492 job5: (groupid=0, jobs=1): err= 0: pid=2734986: Thu Jul 25 07:27:06 2024 00:21:35.492 read: IOPS=67, BW=67.0MiB/s (70.3MB/s)(677MiB/10104msec) 00:21:35.492 slat (usec): min=281, max=2051.8k, avg=14785.79, stdev=84398.01 00:21:35.492 clat (msec): min=90, max=4220, avg=1728.03, stdev=1164.66 00:21:35.492 lat (msec): min=114, max=4224, avg=1742.81, stdev=1168.11 00:21:35.492 clat percentiles (msec): 
00:21:35.492 | 1.00th=[ 171], 5.00th=[ 334], 10.00th=[ 793], 20.00th=[ 986], 00:21:35.492 | 30.00th=[ 1045], 40.00th=[ 1167], 50.00th=[ 1368], 60.00th=[ 1452], 00:21:35.492 | 70.00th=[ 1620], 80.00th=[ 2106], 90.00th=[ 3977], 95.00th=[ 4144], 00:21:35.492 | 99.00th=[ 4212], 99.50th=[ 4212], 99.90th=[ 4212], 99.95th=[ 4212], 00:21:35.492 | 99.99th=[ 4212] 00:21:35.492 bw ( KiB/s): min= 8192, max=137216, per=1.96%, avg=80296.79, stdev=45151.41, samples=14 00:21:35.492 iops : min= 8, max= 134, avg=78.36, stdev=44.07, samples=14 00:21:35.492 lat (msec) : 100=0.15%, 250=2.81%, 500=4.58%, 750=2.22%, 1000=12.11% 00:21:35.492 lat (msec) : 2000=55.10%, >=2000=23.04% 00:21:35.492 cpu : usr=0.02%, sys=1.62%, ctx=1912, majf=0, minf=31935 00:21:35.492 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:21:35.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.492 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.492 issued rwts: total=677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.492 job5: (groupid=0, jobs=1): err= 0: pid=2734987: Thu Jul 25 07:27:06 2024 00:21:35.492 read: IOPS=43, BW=43.8MiB/s (45.9MB/s)(439MiB/10034msec) 00:21:35.492 slat (usec): min=126, max=2108.3k, avg=22782.37, stdev=153282.04 00:21:35.492 clat (msec): min=30, max=6904, avg=1363.82, stdev=1291.68 00:21:35.492 lat (msec): min=34, max=6922, avg=1386.60, stdev=1318.31 00:21:35.492 clat percentiles (msec): 00:21:35.492 | 1.00th=[ 48], 5.00th=[ 116], 10.00th=[ 393], 20.00th=[ 835], 00:21:35.492 | 30.00th=[ 961], 40.00th=[ 1020], 50.00th=[ 1167], 60.00th=[ 1267], 00:21:35.493 | 70.00th=[ 1385], 80.00th=[ 1485], 90.00th=[ 1703], 95.00th=[ 5470], 00:21:35.493 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6879], 99.95th=[ 6879], 00:21:35.493 | 99.99th=[ 6879] 00:21:35.493 bw ( KiB/s): min=22528, max=167936, per=2.21%, avg=90502.71, stdev=49267.15, samples=7 00:21:35.493 iops : min= 22, max= 164, avg=88.29, stdev=48.09, samples=7 00:21:35.493 lat (msec) : 50=1.14%, 100=2.73%, 250=4.78%, 500=2.96%, 750=3.64% 00:21:35.493 lat (msec) : 1000=22.78%, 2000=56.26%, >=2000=5.69% 00:21:35.493 cpu : usr=0.00%, sys=1.04%, ctx=1299, majf=0, minf=32769 00:21:35.493 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.3%, >=64=85.6% 00:21:35.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.493 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.493 issued rwts: total=439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.493 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.493 job5: (groupid=0, jobs=1): err= 0: pid=2734988: Thu Jul 25 07:27:06 2024 00:21:35.493 read: IOPS=128, BW=128MiB/s (134MB/s)(1285MiB/10024msec) 00:21:35.493 slat (usec): min=42, max=2149.0k, avg=7779.82, stdev=93147.71 00:21:35.493 clat (msec): min=20, max=4166, avg=647.54, stdev=735.56 00:21:35.493 lat (msec): min=24, max=4177, avg=655.32, stdev=743.84 00:21:35.493 clat percentiles (msec): 00:21:35.493 | 1.00th=[ 61], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 330], 00:21:35.493 | 30.00th=[ 368], 40.00th=[ 376], 50.00th=[ 409], 60.00th=[ 456], 00:21:35.493 | 70.00th=[ 481], 80.00th=[ 498], 90.00th=[ 2500], 95.00th=[ 2601], 00:21:35.493 | 99.00th=[ 2635], 99.50th=[ 4111], 99.90th=[ 4144], 99.95th=[ 4178], 00:21:35.493 | 99.99th=[ 4178] 00:21:35.493 bw ( KiB/s): min=225280, max=391168, per=7.23%, avg=296317.38, stdev=56844.57, samples=8 
00:21:35.493 iops : min= 220, max= 382, avg=289.25, stdev=55.60, samples=8 00:21:35.493 lat (msec) : 50=0.54%, 100=1.32%, 250=5.60%, 500=73.07%, 750=7.94% 00:21:35.493 lat (msec) : >=2000=11.52% 00:21:35.493 cpu : usr=0.09%, sys=1.91%, ctx=1165, majf=0, minf=32769 00:21:35.493 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:21:35.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.493 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.493 issued rwts: total=1285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.493 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.493 job5: (groupid=0, jobs=1): err= 0: pid=2734989: Thu Jul 25 07:27:06 2024 00:21:35.493 read: IOPS=73, BW=73.4MiB/s (77.0MB/s)(752MiB/10241msec) 00:21:35.493 slat (usec): min=43, max=2068.6k, avg=13485.21, stdev=140198.17 00:21:35.493 clat (msec): min=93, max=6193, avg=964.37, stdev=1214.02 00:21:35.493 lat (msec): min=372, max=6197, avg=977.86, stdev=1229.03 00:21:35.493 clat percentiles (msec): 00:21:35.493 | 1.00th=[ 372], 5.00th=[ 376], 10.00th=[ 376], 20.00th=[ 380], 00:21:35.493 | 30.00th=[ 380], 40.00th=[ 380], 50.00th=[ 384], 60.00th=[ 384], 00:21:35.493 | 70.00th=[ 388], 80.00th=[ 2265], 90.00th=[ 2467], 95.00th=[ 2534], 00:21:35.493 | 99.00th=[ 6208], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:21:35.493 | 99.99th=[ 6208] 00:21:35.493 bw ( KiB/s): min=28672, max=344064, per=6.24%, avg=255590.40, stdev=136890.07, samples=5 00:21:35.493 iops : min= 28, max= 336, avg=249.60, stdev=133.68, samples=5 00:21:35.493 lat (msec) : 100=0.13%, 500=76.99%, >=2000=22.87% 00:21:35.493 cpu : usr=0.00%, sys=2.01%, ctx=647, majf=0, minf=32769 00:21:35.493 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:21:35.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.493 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:35.493 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.493 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.493 job5: (groupid=0, jobs=1): err= 0: pid=2734990: Thu Jul 25 07:27:06 2024 00:21:35.493 read: IOPS=43, BW=43.9MiB/s (46.0MB/s)(443MiB/10091msec) 00:21:35.493 slat (usec): min=44, max=2100.6k, avg=22589.65, stdev=156573.71 00:21:35.493 clat (msec): min=81, max=7166, avg=1601.32, stdev=1742.43 00:21:35.493 lat (msec): min=99, max=7171, avg=1623.91, stdev=1760.76 00:21:35.493 clat percentiles (msec): 00:21:35.493 | 1.00th=[ 112], 5.00th=[ 271], 10.00th=[ 592], 20.00th=[ 701], 00:21:35.493 | 30.00th=[ 768], 40.00th=[ 911], 50.00th=[ 1062], 60.00th=[ 1318], 00:21:35.493 | 70.00th=[ 1586], 80.00th=[ 1720], 90.00th=[ 1838], 95.00th=[ 7080], 00:21:35.493 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7148], 99.95th=[ 7148], 00:21:35.493 | 99.99th=[ 7148] 00:21:35.493 bw ( KiB/s): min= 8192, max=233472, per=2.25%, avg=92134.86, stdev=73777.05, samples=7 00:21:35.493 iops : min= 8, max= 228, avg=89.86, stdev=72.06, samples=7 00:21:35.493 lat (msec) : 100=0.45%, 250=4.06%, 500=3.16%, 750=19.41%, 1000=19.41% 00:21:35.493 lat (msec) : 2000=44.02%, >=2000=9.48% 00:21:35.493 cpu : usr=0.02%, sys=1.21%, ctx=1530, majf=0, minf=32769 00:21:35.493 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.8% 00:21:35.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.493 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 
00:21:35.493 issued rwts: total=443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.493 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.493 job5: (groupid=0, jobs=1): err= 0: pid=2734991: Thu Jul 25 07:27:06 2024 00:21:35.493 read: IOPS=48, BW=48.8MiB/s (51.1MB/s)(490MiB/10045msec) 00:21:35.493 slat (usec): min=483, max=2082.1k, avg=20405.93, stdev=147617.92 00:21:35.493 clat (msec): min=43, max=6894, avg=1082.81, stdev=758.30 00:21:35.493 lat (msec): min=57, max=6902, avg=1103.21, stdev=802.08 00:21:35.493 clat percentiles (msec): 00:21:35.493 | 1.00th=[ 70], 5.00th=[ 257], 10.00th=[ 388], 20.00th=[ 793], 00:21:35.493 | 30.00th=[ 953], 40.00th=[ 1070], 50.00th=[ 1099], 60.00th=[ 1167], 00:21:35.493 | 70.00th=[ 1167], 80.00th=[ 1183], 90.00th=[ 1217], 95.00th=[ 1234], 00:21:35.493 | 99.00th=[ 5336], 99.50th=[ 6812], 99.90th=[ 6879], 99.95th=[ 6879], 00:21:35.493 | 99.99th=[ 6879] 00:21:35.493 bw ( KiB/s): min=36864, max=141380, per=2.59%, avg=105920.57, stdev=38646.81, samples=7 00:21:35.493 iops : min= 36, max= 138, avg=103.43, stdev=37.73, samples=7 00:21:35.493 lat (msec) : 50=0.20%, 100=2.04%, 250=2.45%, 500=8.78%, 750=5.51% 00:21:35.493 lat (msec) : 1000=13.06%, 2000=64.69%, >=2000=3.27% 00:21:35.493 cpu : usr=0.00%, sys=0.92%, ctx=1527, majf=0, minf=32769 00:21:35.493 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.5%, >=64=87.1% 00:21:35.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.493 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.493 issued rwts: total=490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.493 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.493 job5: (groupid=0, jobs=1): err= 0: pid=2734992: Thu Jul 25 07:27:06 2024 00:21:35.493 read: IOPS=42, BW=42.3MiB/s (44.4MB/s)(427MiB/10091msec) 00:21:35.493 slat (usec): min=34, max=2104.8k, avg=23425.24, stdev=160569.82 00:21:35.493 clat (msec): min=86, max=7027, avg=1449.36, stdev=1408.54 00:21:35.493 lat (msec): min=142, max=7035, avg=1472.78, stdev=1433.13 00:21:35.493 clat percentiles (msec): 00:21:35.493 | 1.00th=[ 146], 5.00th=[ 211], 10.00th=[ 288], 20.00th=[ 709], 00:21:35.493 | 30.00th=[ 969], 40.00th=[ 1028], 50.00th=[ 1133], 60.00th=[ 1217], 00:21:35.493 | 70.00th=[ 1502], 80.00th=[ 1804], 90.00th=[ 1989], 95.00th=[ 5403], 00:21:35.493 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:21:35.493 | 99.99th=[ 7013] 00:21:35.493 bw ( KiB/s): min=16384, max=159744, per=2.14%, avg=87758.57, stdev=54919.84, samples=7 00:21:35.493 iops : min= 16, max= 156, avg=85.57, stdev=53.75, samples=7 00:21:35.493 lat (msec) : 100=0.23%, 250=7.03%, 500=10.30%, 750=2.81%, 1000=16.63% 00:21:35.493 lat (msec) : 2000=56.21%, >=2000=6.79% 00:21:35.493 cpu : usr=0.00%, sys=0.99%, ctx=1352, majf=0, minf=32769 00:21:35.493 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.5%, >=64=85.2% 00:21:35.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.493 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.493 issued rwts: total=427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.493 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.493 job5: (groupid=0, jobs=1): err= 0: pid=2734994: Thu Jul 25 07:27:06 2024 00:21:35.493 read: IOPS=49, BW=49.4MiB/s (51.8MB/s)(499MiB/10096msec) 00:21:35.493 slat (usec): min=120, max=2113.5k, avg=20058.88, stdev=133566.61 00:21:35.493 clat (msec): min=83, max=5874, avg=2446.09, 
stdev=1935.99 00:21:35.493 lat (msec): min=145, max=5893, avg=2466.15, stdev=1940.08 00:21:35.493 clat percentiles (msec): 00:21:35.493 | 1.00th=[ 186], 5.00th=[ 468], 10.00th=[ 709], 20.00th=[ 1250], 00:21:35.493 | 30.00th=[ 1351], 40.00th=[ 1418], 50.00th=[ 1502], 60.00th=[ 1754], 00:21:35.493 | 70.00th=[ 1871], 80.00th=[ 5604], 90.00th=[ 5738], 95.00th=[ 5805], 00:21:35.493 | 99.00th=[ 5873], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873], 00:21:35.493 | 99.99th=[ 5873] 00:21:35.494 bw ( KiB/s): min= 2048, max=110592, per=1.55%, avg=63317.33, stdev=32877.88, samples=12 00:21:35.494 iops : min= 2, max= 108, avg=61.83, stdev=32.11, samples=12 00:21:35.494 lat (msec) : 100=0.20%, 250=1.40%, 500=4.61%, 750=4.21%, 1000=2.20% 00:21:35.494 lat (msec) : 2000=60.32%, >=2000=27.05% 00:21:35.494 cpu : usr=0.02%, sys=1.24%, ctx=1395, majf=0, minf=32769 00:21:35.494 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.4% 00:21:35.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.494 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:35.494 issued rwts: total=499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.494 job5: (groupid=0, jobs=1): err= 0: pid=2734995: Thu Jul 25 07:27:06 2024 00:21:35.494 read: IOPS=39, BW=39.8MiB/s (41.8MB/s)(400MiB/10038msec) 00:21:35.494 slat (usec): min=487, max=2103.3k, avg=25017.26, stdev=165426.00 00:21:35.494 clat (msec): min=29, max=6920, avg=1482.02, stdev=1275.09 00:21:35.494 lat (msec): min=44, max=6928, avg=1507.03, stdev=1302.10 00:21:35.494 clat percentiles (msec): 00:21:35.494 | 1.00th=[ 53], 5.00th=[ 201], 10.00th=[ 542], 20.00th=[ 927], 00:21:35.494 | 30.00th=[ 1083], 40.00th=[ 1267], 50.00th=[ 1368], 60.00th=[ 1452], 00:21:35.494 | 70.00th=[ 1502], 80.00th=[ 1569], 90.00th=[ 1703], 95.00th=[ 3205], 00:21:35.494 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6946], 99.95th=[ 6946], 00:21:35.494 | 99.99th=[ 6946] 00:21:35.494 bw ( KiB/s): min=45056, max=126976, per=1.95%, avg=79872.00, stdev=24519.05, samples=7 00:21:35.494 iops : min= 44, max= 124, avg=78.00, stdev=23.94, samples=7 00:21:35.494 lat (msec) : 50=0.75%, 100=1.50%, 250=3.50%, 500=3.50%, 750=5.75% 00:21:35.494 lat (msec) : 1000=9.50%, 2000=70.25%, >=2000=5.25% 00:21:35.494 cpu : usr=0.00%, sys=1.12%, ctx=1700, majf=0, minf=32769 00:21:35.494 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.2% 00:21:35.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.494 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:35.494 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.494 job5: (groupid=0, jobs=1): err= 0: pid=2734996: Thu Jul 25 07:27:06 2024 00:21:35.494 read: IOPS=43, BW=43.9MiB/s (46.0MB/s)(445MiB/10143msec) 00:21:35.494 slat (usec): min=77, max=2146.3k, avg=22688.94, stdev=175108.09 00:21:35.494 clat (msec): min=41, max=8502, avg=1885.09, stdev=1690.89 00:21:35.494 lat (msec): min=628, max=8528, avg=1907.78, stdev=1711.55 00:21:35.494 clat percentiles (msec): 00:21:35.494 | 1.00th=[ 625], 5.00th=[ 634], 10.00th=[ 634], 20.00th=[ 634], 00:21:35.494 | 30.00th=[ 634], 40.00th=[ 642], 50.00th=[ 642], 60.00th=[ 718], 00:21:35.494 | 70.00th=[ 2735], 80.00th=[ 4329], 90.00th=[ 4530], 95.00th=[ 4665], 00:21:35.494 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 8490], 99.95th=[ 8490], 
00:21:35.494 | 99.99th=[ 8490]
00:21:35.494 bw ( KiB/s): min=14336, max=208896, per=2.64%, avg=108202.67, stdev=87805.64, samples=6
00:21:35.494 iops : min= 14, max= 204, avg=105.67, stdev=85.75, samples=6
00:21:35.494 lat (msec) : 50=0.22%, 750=60.67%, 2000=0.22%, >=2000=38.88%
00:21:35.494 cpu : usr=0.06%, sys=1.53%, ctx=460, majf=0, minf=32769
00:21:35.494 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.8%
00:21:35.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:35.494 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:21:35.494 issued rwts: total=445,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:35.494 latency : target=0, window=0, percentile=100.00%, depth=128
00:21:35.494 job5: (groupid=0, jobs=1): err= 0: pid=2734997: Thu Jul 25 07:27:06 2024
00:21:35.494 read: IOPS=41, BW=41.0MiB/s (43.0MB/s)(414MiB/10095msec)
00:21:35.494 slat (usec): min=43, max=2139.8k, avg=24181.45, stdev=164433.11
00:21:35.494 clat (msec): min=82, max=7071, avg=1457.56, stdev=1342.63
00:21:35.494 lat (msec): min=100, max=7089, avg=1481.74, stdev=1369.32
00:21:35.494 clat percentiles (msec):
00:21:35.494 | 1.00th=[ 171], 5.00th=[ 426], 10.00th=[ 535], 20.00th=[ 659],
00:21:35.494 | 30.00th=[ 877], 40.00th=[ 1167], 50.00th=[ 1267], 60.00th=[ 1318],
00:21:35.494 | 70.00th=[ 1519], 80.00th=[ 1687], 90.00th=[ 1854], 95.00th=[ 5470],
00:21:35.494 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7080], 99.95th=[ 7080],
00:21:35.494 | 99.99th=[ 7080]
00:21:35.494 bw ( KiB/s): min=12288, max=163840, per=2.04%, avg=83663.14, stdev=50391.67, samples=7
00:21:35.494 iops : min= 12, max= 160, avg=81.57, stdev=49.34, samples=7
00:21:35.494 lat (msec) : 100=0.24%, 250=1.69%, 500=7.49%, 750=13.53%, 1000=13.04%
00:21:35.494 lat (msec) : 2000=58.21%, >=2000=5.80%
00:21:35.494 cpu : usr=0.02%, sys=0.93%, ctx=1278, majf=0, minf=32769
00:21:35.494 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, >=64=84.8%
00:21:35.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:35.494 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:21:35.494 issued rwts: total=414,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:35.494 latency : target=0, window=0, percentile=100.00%, depth=128
00:21:35.494
00:21:35.494 Run status group 0 (all jobs):
00:21:35.494 READ: bw=4001MiB/s (4196MB/s), 1003KiB/s-243MiB/s (1027kB/s-255MB/s), io=40.3GiB (43.3GB), run=10022-10325msec
00:21:35.494
00:21:35.494 Disk stats (read/write):
00:21:35.494 nvme0n1: ios=40829/0, merge=0/0, ticks=4981547/0, in_queue=4981547, util=98.09%
00:21:35.494 nvme1n1: ios=63316/0, merge=0/0, ticks=7746587/0, in_queue=7746587, util=98.41%
00:21:35.494 nvme2n1: ios=65918/0, merge=0/0, ticks=6320995/0, in_queue=6320995, util=98.58%
00:21:35.494 nvme3n1: ios=51628/0, merge=0/0, ticks=6251454/0, in_queue=6251454, util=98.54%
00:21:35.494 nvme4n1: ios=42057/0, merge=0/0, ticks=6683012/0, in_queue=6683012, util=98.63%
00:21:35.494 nvme5n1: ios=65649/0, merge=0/0, ticks=7942629/0, in_queue=7942629, util=99.20%
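The job statistics and disk utilization figures above are fio's standard report: read-only workloads against the six NVMe-oF namespaces (nvme0n1 through nvme5n1), queue depth up to 128, roughly ten-second runs (run=10022-10325msec). The job file srq_overwhelm.sh feeds to fio is not reproduced in this log; a minimal invocation that yields output of this shape, with hypothetical device and job names, would be:

    # hypothetical fio run; the real job parameters are defined by the test script
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randread --direct=1 --ioengine=libaio \
        --iodepth=128 --runtime=10 --time_based --group_reporting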
00:21:35.494 07:27:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync
00:21:35.494 07:27:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5
00:21:35.494 07:27:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:21:35.494 07:27:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0
00:21:35.494 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s)
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
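rpc_cmd in the trace above is the autotest wrapper around SPDK's JSON-RPC client; assuming the default RPC socket (/var/tmp/spdk.sock), the equivalent direct call would be roughly:

    # hedged equivalent of the rpc_cmd call above; the wrapper itself is
    # defined in common/autotest_common.sh and is not shown in this log
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0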
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:21:35.494 07:27:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:21:36.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:21:36.427 07:27:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:21:37.360 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:21:37.360 07:27:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:21:38.297 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:21:38.297 07:27:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:21:39.234 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:21:39.234 07:27:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:21:40.169 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:21:40.169 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005
00:21:40.169 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0
00:21:40.169 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:21:40.169 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT
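Each waitforserial_disconnect call in the loop above polls lsblk until the given SPDK serial number stops appearing, which is what the repeated lsblk/grep pairs in the trace are. The real helper lives in common/autotest_common.sh (the @1219-@1231 line references); a sketch consistent with the traced commands, with the retry limit assumed rather than taken from the log:

    waitforserial_disconnect() {
        local i=0
        # keep polling while some block device still reports the serial
        while lsblk -o NAME,SERIAL | grep -q -w "$1"; do
            ((++i > 15)) && return 1   # assumed timeout; the trace only shows the fast path
            sleep 1
        done
        # confirm with the flat listing before declaring the device gone
        lsblk -l -o NAME,SERIAL | grep -q -w "$1" && return 1
        return 0
    }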
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:21:40.428 rmmod nvme_rdma
00:21:40.428 rmmod nvme_fabrics
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 2733317 ']'
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 2733317
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # '[' -z 2733317 ']'
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # kill -0 2733317
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # uname
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2733317
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2733317'
00:21:40.428 killing process with pid 2733317
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@969 -- # kill 2733317
00:21:40.428 07:27:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@974 -- # wait 2733317
00:21:40.688 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:40.688 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:21:40.688
00:21:40.688 real 0m33.650s
00:21:40.688 user 1m50.878s
00:21:40.688 sys 0m18.061s
00:21:40.688 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:40.688 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:21:40.688 ************************************
00:21:40.688 END TEST nvmf_srq_overwhelm
00:21:40.688 ************************************
00:21:40.688 07:27:13
nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:21:40.688 07:27:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:40.688 07:27:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:40.688 07:27:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:40.948 ************************************ 00:21:40.948 START TEST nvmf_shutdown 00:21:40.948 ************************************ 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:21:40.948 * Looking for test storage... 00:21:40.948 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.948 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:40.949 ************************************ 00:21:40.949 START TEST nvmf_shutdown_tc1 00:21:40.949 ************************************ 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:40.949 07:27:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 
-- # pci_devs=() 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:51.003 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == 
rdma ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:51.004 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:51.004 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.004 07:27:21 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:51.004 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:51.004 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:51.004 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 
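Up to this point the trace has resolved each matched Mellanox function (vendor 0x15b3, device 0x1015) to its kernel net device by globbing sysfs. A minimal sketch of that discovery pattern, with the BDFs taken from this run and only the standard Linux sysfs layout assumed:

#!/usr/bin/env bash
# Resolve each PCI function to the net devices registered under it in sysfs.
pci_devs=("0000:d9:00.0" "0000:d9:00.1")               # the two ConnectX ports found above
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the netdev directories
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done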
00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:51.005 07:27:21 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:51.005 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:51.005 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:51.005 altname enp217s0f0np0 00:21:51.005 altname ens818f0np0 00:21:51.005 inet 192.168.100.8/24 scope global mlx_0_0 00:21:51.005 valid_lft forever preferred_lft forever 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:51.005 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:51.005 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:51.005 altname enp217s0f1np1 00:21:51.005 altname ens818f1np1 00:21:51.005 inet 192.168.100.9/24 scope global mlx_0_1 00:21:51.005 valid_lft forever preferred_lft forever 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:51.005 07:27:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 
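The ip/awk/cut pipeline that recurs above is the script's get_ip_address helper; reconstructed from the traced commands it is simply:

# First IPv4 address assigned to an interface (as traced at nvmf/common.sh@112-113).
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0    # prints 192.168.100.8 on this rig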
00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:51.005 192.168.100.9' 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:51.005 192.168.100.9' 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:51.005 192.168.100.9' 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2742297 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2742297 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:51.005 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2742297 ']' 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
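The head/tail pair traced above splits the collected newline-separated address list into the first and second target IPs before the target app is launched. Reduced to its essentials, with the values captured in this run:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
# nvmf/common.sh@459's '-z' test then guards against an empty list
# (its error handling is elided in this sketch).
[ -n "$NVMF_FIRST_TARGET_IP" ] || echo 'no RDMA-capable IPs detected' >&2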
00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.006 [2024-07-25 07:27:22.127184] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:21:51.006 [2024-07-25 07:27:22.127247] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.006 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.006 [2024-07-25 07:27:22.214825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.006 [2024-07-25 07:27:22.286676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.006 [2024-07-25 07:27:22.286713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.006 [2024-07-25 07:27:22.286734] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.006 [2024-07-25 07:27:22.286743] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.006 [2024-07-25 07:27:22.286751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.006 [2024-07-25 07:27:22.286855] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.006 [2024-07-25 07:27:22.286937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.006 [2024-07-25 07:27:22.287043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.006 [2024-07-25 07:27:22.287045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.006 07:27:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.006 [2024-07-25 07:27:23.015143] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19440d0/0x19485c0) succeed. 00:21:51.006 [2024-07-25 07:27:23.024400] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1945710/0x1989c50) succeed. 
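nvmfappstart pins nvmf_tgt to core mask 0x1E, which is why four reactors come up on cores 1 through 4 above, and rpc_cmd is effectively a wrapper around SPDK's scripts/rpc.py aimed at /var/tmp/spdk.sock. The traced transport-creation step is therefore equivalent to this direct call (paths as used throughout this job):

# Create the RDMA transport on the running target; -u sets the I/O unit size.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192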
00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.006 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.006 Malloc1 00:21:51.006 [2024-07-25 07:27:23.251055] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:51.006 Malloc2 00:21:51.006 Malloc3 00:21:51.006 Malloc4 00:21:51.006 Malloc5 00:21:51.006 Malloc6 00:21:51.006 Malloc7 00:21:51.268 Malloc8 00:21:51.268 Malloc9 00:21:51.268 Malloc10 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2742614 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2742614 /var/tmp/bdevperf.sock 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2742614 ']' 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
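shutdown.sh@26-28, traced above, build rpcs.txt with one block of target-side RPCs per subsystem, and shutdown.sh@35 replays the whole file through rpc_cmd; that batch is what produces Malloc1 through Malloc10 and the rdma listener on 192.168.100.8:4420. The heredoc contents are not echoed into the log, so the block below is a plausible reconstruction of one loop iteration using standard SPDK RPC names; the malloc size and block-size values are illustrative, not from the log:

for i in {1..10}; do
    cat >> rpcs.txt << EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done
rpc_cmd < rpcs.txt    # shutdown.sh@35: apply the accumulated batch in one session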
00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.268 { 00:21:51.268 "params": { 00:21:51.268 "name": "Nvme$subsystem", 00:21:51.268 "trtype": "$TEST_TRANSPORT", 00:21:51.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.268 "adrfam": "ipv4", 00:21:51.268 "trsvcid": "$NVMF_PORT", 00:21:51.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.268 "hdgst": ${hdgst:-false}, 00:21:51.268 "ddgst": ${ddgst:-false} 00:21:51.268 }, 00:21:51.268 "method": "bdev_nvme_attach_controller" 00:21:51.268 } 00:21:51.268 EOF 00:21:51.268 )") 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.268 { 00:21:51.268 "params": { 00:21:51.268 "name": "Nvme$subsystem", 00:21:51.268 "trtype": "$TEST_TRANSPORT", 00:21:51.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.268 "adrfam": "ipv4", 00:21:51.268 "trsvcid": "$NVMF_PORT", 00:21:51.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.268 "hdgst": ${hdgst:-false}, 00:21:51.268 "ddgst": ${ddgst:-false} 00:21:51.268 }, 00:21:51.268 "method": "bdev_nvme_attach_controller" 00:21:51.268 } 00:21:51.268 EOF 00:21:51.268 )") 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.268 { 00:21:51.268 "params": { 00:21:51.268 "name": "Nvme$subsystem", 00:21:51.268 "trtype": "$TEST_TRANSPORT", 00:21:51.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.268 "adrfam": "ipv4", 00:21:51.268 "trsvcid": "$NVMF_PORT", 00:21:51.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.268 "hdgst": ${hdgst:-false}, 00:21:51.268 "ddgst": ${ddgst:-false} 00:21:51.268 }, 00:21:51.268 "method": "bdev_nvme_attach_controller" 00:21:51.268 } 00:21:51.268 EOF 00:21:51.268 )") 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.268 { 00:21:51.268 "params": { 00:21:51.268 "name": "Nvme$subsystem", 00:21:51.268 "trtype": "$TEST_TRANSPORT", 00:21:51.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.268 "adrfam": "ipv4", 00:21:51.268 "trsvcid": "$NVMF_PORT", 00:21:51.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.268 "hdgst": ${hdgst:-false}, 00:21:51.268 "ddgst": ${ddgst:-false} 00:21:51.268 }, 00:21:51.268 "method": "bdev_nvme_attach_controller" 00:21:51.268 } 00:21:51.268 EOF 00:21:51.268 )") 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.268 { 00:21:51.268 "params": { 00:21:51.268 "name": "Nvme$subsystem", 00:21:51.268 "trtype": "$TEST_TRANSPORT", 00:21:51.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.268 "adrfam": "ipv4", 00:21:51.268 "trsvcid": "$NVMF_PORT", 00:21:51.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.268 "hdgst": ${hdgst:-false}, 00:21:51.268 "ddgst": ${ddgst:-false} 00:21:51.268 }, 00:21:51.268 "method": "bdev_nvme_attach_controller" 00:21:51.268 } 00:21:51.268 EOF 00:21:51.268 )") 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.268 [2024-07-25 07:27:23.735935] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:21:51.268 [2024-07-25 07:27:23.735989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.268 { 00:21:51.268 "params": { 00:21:51.268 "name": "Nvme$subsystem", 00:21:51.268 "trtype": "$TEST_TRANSPORT", 00:21:51.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.268 "adrfam": "ipv4", 00:21:51.268 "trsvcid": "$NVMF_PORT", 00:21:51.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.268 "hdgst": ${hdgst:-false}, 00:21:51.268 "ddgst": ${ddgst:-false} 00:21:51.268 }, 00:21:51.268 "method": "bdev_nvme_attach_controller" 00:21:51.268 } 00:21:51.268 EOF 00:21:51.268 )") 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.268 { 00:21:51.268 "params": { 00:21:51.268 "name": "Nvme$subsystem", 00:21:51.268 "trtype": "$TEST_TRANSPORT", 00:21:51.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.268 "adrfam": "ipv4", 00:21:51.268 "trsvcid": "$NVMF_PORT", 00:21:51.268 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.268 "hdgst": ${hdgst:-false}, 00:21:51.268 "ddgst": ${ddgst:-false} 00:21:51.268 }, 00:21:51.268 "method": "bdev_nvme_attach_controller" 00:21:51.268 } 00:21:51.268 EOF 00:21:51.268 )") 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.268 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.268 { 00:21:51.268 "params": { 00:21:51.268 "name": "Nvme$subsystem", 00:21:51.268 "trtype": "$TEST_TRANSPORT", 00:21:51.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.268 "adrfam": "ipv4", 00:21:51.268 "trsvcid": "$NVMF_PORT", 00:21:51.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.268 "hdgst": ${hdgst:-false}, 00:21:51.269 "ddgst": ${ddgst:-false} 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 } 00:21:51.269 EOF 00:21:51.269 )") 00:21:51.269 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.269 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.269 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.269 { 00:21:51.269 "params": { 00:21:51.269 "name": "Nvme$subsystem", 00:21:51.269 "trtype": "$TEST_TRANSPORT", 00:21:51.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.269 "adrfam": "ipv4", 00:21:51.269 "trsvcid": "$NVMF_PORT", 00:21:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.269 "hdgst": ${hdgst:-false}, 00:21:51.269 "ddgst": ${ddgst:-false} 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 } 00:21:51.269 EOF 00:21:51.269 )") 00:21:51.269 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.269 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.269 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.269 { 00:21:51.269 "params": { 00:21:51.269 "name": "Nvme$subsystem", 00:21:51.269 "trtype": "$TEST_TRANSPORT", 00:21:51.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.269 "adrfam": "ipv4", 00:21:51.269 "trsvcid": "$NVMF_PORT", 00:21:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.269 "hdgst": ${hdgst:-false}, 00:21:51.269 "ddgst": ${ddgst:-false} 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 } 00:21:51.269 EOF 00:21:51.269 )") 00:21:51.269 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.269 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.269 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:51.269 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:51.269 07:27:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:51.269 "params": { 00:21:51.269 "name": "Nvme1", 00:21:51.269 "trtype": "rdma", 00:21:51.269 "traddr": "192.168.100.8", 00:21:51.269 "adrfam": "ipv4", 00:21:51.269 "trsvcid": "4420", 00:21:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:51.269 "hdgst": false, 00:21:51.269 "ddgst": false 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 },{ 00:21:51.269 "params": { 00:21:51.269 "name": "Nvme2", 00:21:51.269 "trtype": "rdma", 00:21:51.269 "traddr": "192.168.100.8", 00:21:51.269 "adrfam": "ipv4", 00:21:51.269 "trsvcid": "4420", 00:21:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:51.269 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:51.269 "hdgst": false, 00:21:51.269 "ddgst": false 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 },{ 00:21:51.269 "params": { 00:21:51.269 "name": "Nvme3", 00:21:51.269 "trtype": "rdma", 00:21:51.269 "traddr": "192.168.100.8", 00:21:51.269 "adrfam": "ipv4", 00:21:51.269 "trsvcid": "4420", 00:21:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:51.269 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:51.269 "hdgst": false, 00:21:51.269 "ddgst": false 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 },{ 00:21:51.269 "params": { 00:21:51.269 "name": "Nvme4", 00:21:51.269 "trtype": "rdma", 00:21:51.269 "traddr": "192.168.100.8", 00:21:51.269 "adrfam": "ipv4", 00:21:51.269 "trsvcid": "4420", 00:21:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:51.269 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:51.269 "hdgst": false, 00:21:51.269 "ddgst": false 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 },{ 00:21:51.269 "params": { 00:21:51.269 "name": "Nvme5", 00:21:51.269 "trtype": "rdma", 00:21:51.269 "traddr": "192.168.100.8", 00:21:51.269 "adrfam": "ipv4", 00:21:51.269 "trsvcid": "4420", 00:21:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:51.269 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:51.269 "hdgst": false, 00:21:51.269 "ddgst": false 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 },{ 00:21:51.269 "params": { 00:21:51.269 "name": "Nvme6", 00:21:51.269 "trtype": "rdma", 00:21:51.269 "traddr": "192.168.100.8", 00:21:51.269 "adrfam": "ipv4", 00:21:51.269 "trsvcid": "4420", 00:21:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:51.269 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:51.269 "hdgst": false, 00:21:51.269 "ddgst": false 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 },{ 00:21:51.269 "params": { 00:21:51.269 "name": "Nvme7", 00:21:51.269 "trtype": "rdma", 00:21:51.269 "traddr": "192.168.100.8", 00:21:51.269 "adrfam": "ipv4", 00:21:51.269 "trsvcid": "4420", 00:21:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:51.269 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:51.269 "hdgst": false, 00:21:51.269 "ddgst": false 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 },{ 00:21:51.269 "params": { 00:21:51.269 "name": "Nvme8", 00:21:51.269 "trtype": "rdma", 00:21:51.269 "traddr": "192.168.100.8", 00:21:51.269 "adrfam": "ipv4", 00:21:51.269 "trsvcid": "4420", 00:21:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:21:51.269 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:51.269 "hdgst": false, 00:21:51.269 "ddgst": false 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 },{ 00:21:51.269 "params": { 00:21:51.269 "name": "Nvme9", 00:21:51.269 "trtype": "rdma", 00:21:51.269 "traddr": "192.168.100.8", 00:21:51.269 "adrfam": "ipv4", 00:21:51.269 "trsvcid": "4420", 00:21:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:51.269 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:51.269 "hdgst": false, 00:21:51.269 "ddgst": false 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 },{ 00:21:51.269 "params": { 00:21:51.269 "name": "Nvme10", 00:21:51.269 "trtype": "rdma", 00:21:51.269 "traddr": "192.168.100.8", 00:21:51.269 "adrfam": "ipv4", 00:21:51.269 "trsvcid": "4420", 00:21:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:51.269 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:51.269 "hdgst": false, 00:21:51.269 "ddgst": false 00:21:51.269 }, 00:21:51.269 "method": "bdev_nvme_attach_controller" 00:21:51.269 }' 00:21:51.528 [2024-07-25 07:27:23.823596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.528 [2024-07-25 07:27:23.892528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.464 07:27:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:52.464 07:27:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:52.464 07:27:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:52.464 07:27:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.464 07:27:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.464 07:27:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.464 07:27:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2742614 00:21:52.464 07:27:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:52.464 07:27:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:53.401 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2742614 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:53.401 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2742297 00:21:53.401 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:53.401 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:53.401 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:53.401 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:53.401 07:27:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.401 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.401 { 00:21:53.401 "params": { 00:21:53.401 "name": "Nvme$subsystem", 00:21:53.401 "trtype": "$TEST_TRANSPORT", 00:21:53.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.401 "adrfam": "ipv4", 00:21:53.401 "trsvcid": "$NVMF_PORT", 00:21:53.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.401 "hdgst": ${hdgst:-false}, 00:21:53.401 "ddgst": ${ddgst:-false} 00:21:53.401 }, 00:21:53.401 "method": "bdev_nvme_attach_controller" 00:21:53.402 } 00:21:53.402 EOF 00:21:53.402 )") 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.402 { 00:21:53.402 "params": { 00:21:53.402 "name": "Nvme$subsystem", 00:21:53.402 "trtype": "$TEST_TRANSPORT", 00:21:53.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.402 "adrfam": "ipv4", 00:21:53.402 "trsvcid": "$NVMF_PORT", 00:21:53.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.402 "hdgst": ${hdgst:-false}, 00:21:53.402 "ddgst": ${ddgst:-false} 00:21:53.402 }, 00:21:53.402 "method": "bdev_nvme_attach_controller" 00:21:53.402 } 00:21:53.402 EOF 00:21:53.402 )") 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.402 { 00:21:53.402 "params": { 00:21:53.402 "name": "Nvme$subsystem", 00:21:53.402 "trtype": "$TEST_TRANSPORT", 00:21:53.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.402 "adrfam": "ipv4", 00:21:53.402 "trsvcid": "$NVMF_PORT", 00:21:53.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.402 "hdgst": ${hdgst:-false}, 00:21:53.402 "ddgst": ${ddgst:-false} 00:21:53.402 }, 00:21:53.402 "method": "bdev_nvme_attach_controller" 00:21:53.402 } 00:21:53.402 EOF 00:21:53.402 )") 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.402 { 00:21:53.402 "params": { 00:21:53.402 "name": "Nvme$subsystem", 00:21:53.402 "trtype": "$TEST_TRANSPORT", 00:21:53.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.402 "adrfam": "ipv4", 00:21:53.402 "trsvcid": "$NVMF_PORT", 00:21:53.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.402 "hdgst": ${hdgst:-false}, 00:21:53.402 "ddgst": ${ddgst:-false} 00:21:53.402 }, 00:21:53.402 "method": 
"bdev_nvme_attach_controller" 00:21:53.402 } 00:21:53.402 EOF 00:21:53.402 )") 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.402 { 00:21:53.402 "params": { 00:21:53.402 "name": "Nvme$subsystem", 00:21:53.402 "trtype": "$TEST_TRANSPORT", 00:21:53.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.402 "adrfam": "ipv4", 00:21:53.402 "trsvcid": "$NVMF_PORT", 00:21:53.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.402 "hdgst": ${hdgst:-false}, 00:21:53.402 "ddgst": ${ddgst:-false} 00:21:53.402 }, 00:21:53.402 "method": "bdev_nvme_attach_controller" 00:21:53.402 } 00:21:53.402 EOF 00:21:53.402 )") 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.402 [2024-07-25 07:27:25.808102] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:21:53.402 [2024-07-25 07:27:25.808155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743088 ] 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.402 { 00:21:53.402 "params": { 00:21:53.402 "name": "Nvme$subsystem", 00:21:53.402 "trtype": "$TEST_TRANSPORT", 00:21:53.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.402 "adrfam": "ipv4", 00:21:53.402 "trsvcid": "$NVMF_PORT", 00:21:53.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.402 "hdgst": ${hdgst:-false}, 00:21:53.402 "ddgst": ${ddgst:-false} 00:21:53.402 }, 00:21:53.402 "method": "bdev_nvme_attach_controller" 00:21:53.402 } 00:21:53.402 EOF 00:21:53.402 )") 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.402 { 00:21:53.402 "params": { 00:21:53.402 "name": "Nvme$subsystem", 00:21:53.402 "trtype": "$TEST_TRANSPORT", 00:21:53.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.402 "adrfam": "ipv4", 00:21:53.402 "trsvcid": "$NVMF_PORT", 00:21:53.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.402 "hdgst": ${hdgst:-false}, 00:21:53.402 "ddgst": ${ddgst:-false} 00:21:53.402 }, 00:21:53.402 "method": "bdev_nvme_attach_controller" 00:21:53.402 } 00:21:53.402 EOF 00:21:53.402 )") 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.402 { 00:21:53.402 "params": { 00:21:53.402 "name": "Nvme$subsystem", 00:21:53.402 "trtype": "$TEST_TRANSPORT", 00:21:53.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.402 "adrfam": "ipv4", 00:21:53.402 "trsvcid": "$NVMF_PORT", 00:21:53.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.402 "hdgst": ${hdgst:-false}, 00:21:53.402 "ddgst": ${ddgst:-false} 00:21:53.402 }, 00:21:53.402 "method": "bdev_nvme_attach_controller" 00:21:53.402 } 00:21:53.402 EOF 00:21:53.402 )") 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.402 { 00:21:53.402 "params": { 00:21:53.402 "name": "Nvme$subsystem", 00:21:53.402 "trtype": "$TEST_TRANSPORT", 00:21:53.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.402 "adrfam": "ipv4", 00:21:53.402 "trsvcid": "$NVMF_PORT", 00:21:53.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.402 "hdgst": ${hdgst:-false}, 00:21:53.402 "ddgst": ${ddgst:-false} 00:21:53.402 }, 00:21:53.402 "method": "bdev_nvme_attach_controller" 00:21:53.402 } 00:21:53.402 EOF 00:21:53.402 )") 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.402 { 00:21:53.402 "params": { 00:21:53.402 "name": "Nvme$subsystem", 00:21:53.402 "trtype": "$TEST_TRANSPORT", 00:21:53.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.402 "adrfam": "ipv4", 00:21:53.402 "trsvcid": "$NVMF_PORT", 00:21:53.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.402 "hdgst": ${hdgst:-false}, 00:21:53.402 "ddgst": ${ddgst:-false} 00:21:53.402 }, 00:21:53.402 "method": "bdev_nvme_attach_controller" 00:21:53.402 } 00:21:53.402 EOF 00:21:53.402 )") 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:53.402 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:53.402 07:27:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:53.402 "params": { 00:21:53.402 "name": "Nvme1", 00:21:53.402 "trtype": "rdma", 00:21:53.402 "traddr": "192.168.100.8", 00:21:53.402 "adrfam": "ipv4", 00:21:53.402 "trsvcid": "4420", 00:21:53.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:53.402 "hdgst": false, 00:21:53.402 "ddgst": false 00:21:53.402 }, 00:21:53.402 "method": "bdev_nvme_attach_controller" 00:21:53.402 },{ 00:21:53.402 "params": { 00:21:53.402 "name": "Nvme2", 00:21:53.402 "trtype": "rdma", 00:21:53.402 "traddr": "192.168.100.8", 00:21:53.402 "adrfam": "ipv4", 00:21:53.402 "trsvcid": "4420", 00:21:53.403 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:53.403 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:53.403 "hdgst": false, 00:21:53.403 "ddgst": false 00:21:53.403 }, 00:21:53.403 "method": "bdev_nvme_attach_controller" 00:21:53.403 },{ 00:21:53.403 "params": { 00:21:53.403 "name": "Nvme3", 00:21:53.403 "trtype": "rdma", 00:21:53.403 "traddr": "192.168.100.8", 00:21:53.403 "adrfam": "ipv4", 00:21:53.403 "trsvcid": "4420", 00:21:53.403 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:53.403 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:53.403 "hdgst": false, 00:21:53.403 "ddgst": false 00:21:53.403 }, 00:21:53.403 "method": "bdev_nvme_attach_controller" 00:21:53.403 },{ 00:21:53.403 "params": { 00:21:53.403 "name": "Nvme4", 00:21:53.403 "trtype": "rdma", 00:21:53.403 "traddr": "192.168.100.8", 00:21:53.403 "adrfam": "ipv4", 00:21:53.403 "trsvcid": "4420", 00:21:53.403 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:53.403 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:53.403 "hdgst": false, 00:21:53.403 "ddgst": false 00:21:53.403 }, 00:21:53.403 "method": "bdev_nvme_attach_controller" 00:21:53.403 },{ 00:21:53.403 "params": { 00:21:53.403 "name": "Nvme5", 00:21:53.403 "trtype": "rdma", 00:21:53.403 "traddr": "192.168.100.8", 00:21:53.403 "adrfam": "ipv4", 00:21:53.403 "trsvcid": "4420", 00:21:53.403 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:53.403 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:53.403 "hdgst": false, 00:21:53.403 "ddgst": false 00:21:53.403 }, 00:21:53.403 "method": "bdev_nvme_attach_controller" 00:21:53.403 },{ 00:21:53.403 "params": { 00:21:53.403 "name": "Nvme6", 00:21:53.403 "trtype": "rdma", 00:21:53.403 "traddr": "192.168.100.8", 00:21:53.403 "adrfam": "ipv4", 00:21:53.403 "trsvcid": "4420", 00:21:53.403 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:53.403 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:53.403 "hdgst": false, 00:21:53.403 "ddgst": false 00:21:53.403 }, 00:21:53.403 "method": "bdev_nvme_attach_controller" 00:21:53.403 },{ 00:21:53.403 "params": { 00:21:53.403 "name": "Nvme7", 00:21:53.403 "trtype": "rdma", 00:21:53.403 "traddr": "192.168.100.8", 00:21:53.403 "adrfam": "ipv4", 00:21:53.403 "trsvcid": "4420", 00:21:53.403 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:53.403 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:53.403 "hdgst": false, 00:21:53.403 "ddgst": false 00:21:53.403 }, 00:21:53.403 "method": "bdev_nvme_attach_controller" 00:21:53.403 },{ 00:21:53.403 "params": { 00:21:53.403 "name": "Nvme8", 00:21:53.403 "trtype": "rdma", 00:21:53.403 "traddr": "192.168.100.8", 00:21:53.403 "adrfam": "ipv4", 00:21:53.403 "trsvcid": 
"4420", 00:21:53.403 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:53.403 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:53.403 "hdgst": false, 00:21:53.403 "ddgst": false 00:21:53.403 }, 00:21:53.403 "method": "bdev_nvme_attach_controller" 00:21:53.403 },{ 00:21:53.403 "params": { 00:21:53.403 "name": "Nvme9", 00:21:53.403 "trtype": "rdma", 00:21:53.403 "traddr": "192.168.100.8", 00:21:53.403 "adrfam": "ipv4", 00:21:53.403 "trsvcid": "4420", 00:21:53.403 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:53.403 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:53.403 "hdgst": false, 00:21:53.403 "ddgst": false 00:21:53.403 }, 00:21:53.403 "method": "bdev_nvme_attach_controller" 00:21:53.403 },{ 00:21:53.403 "params": { 00:21:53.403 "name": "Nvme10", 00:21:53.403 "trtype": "rdma", 00:21:53.403 "traddr": "192.168.100.8", 00:21:53.403 "adrfam": "ipv4", 00:21:53.403 "trsvcid": "4420", 00:21:53.403 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:53.403 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:53.403 "hdgst": false, 00:21:53.403 "ddgst": false 00:21:53.403 }, 00:21:53.403 "method": "bdev_nvme_attach_controller" 00:21:53.403 }' 00:21:53.403 [2024-07-25 07:27:25.896424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.662 [2024-07-25 07:27:25.967453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.598 Running I/O for 1 seconds... 00:21:55.535 00:21:55.535 Latency(us) 00:21:55.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.536 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.536 Verification LBA range: start 0x0 length 0x400 00:21:55.536 Nvme1n1 : 1.16 386.54 24.16 0.00 0.00 163378.21 19398.66 182871.65 00:21:55.536 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.536 Verification LBA range: start 0x0 length 0x400 00:21:55.536 Nvme2n1 : 1.16 386.16 24.14 0.00 0.00 159504.33 21390.95 176160.77 00:21:55.536 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.536 Verification LBA range: start 0x0 length 0x400 00:21:55.536 Nvme3n1 : 1.16 391.82 24.49 0.00 0.00 155082.40 5740.95 168611.02 00:21:55.536 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.536 Verification LBA range: start 0x0 length 0x400 00:21:55.536 Nvme4n1 : 1.16 388.01 24.25 0.00 0.00 154465.39 4561.31 158544.69 00:21:55.536 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.536 Verification LBA range: start 0x0 length 0x400 00:21:55.536 Nvme5n1 : 1.16 385.00 24.06 0.00 0.00 154909.55 26738.69 151833.80 00:21:55.536 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.536 Verification LBA range: start 0x0 length 0x400 00:21:55.536 Nvme6n1 : 1.16 384.63 24.04 0.00 0.00 151516.66 19608.37 145122.92 00:21:55.536 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.536 Verification LBA range: start 0x0 length 0x400 00:21:55.536 Nvme7n1 : 1.17 402.31 25.14 0.00 0.00 142462.24 4928.31 130023.42 00:21:55.536 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.536 Verification LBA range: start 0x0 length 0x400 00:21:55.536 Nvme8n1 : 1.17 406.25 25.39 0.00 0.00 138889.26 7549.75 111568.49 00:21:55.536 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.536 Verification LBA range: start 0x0 length 0x400 00:21:55.536 Nvme9n1 : 1.18 435.01 27.19 0.00 0.00 130795.78 3185.05 102341.02 
00:21:55.536 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:55.536 Verification LBA range: start 0x0 length 0x400 00:21:55.536 Nvme10n1 : 1.18 325.84 20.36 0.00 0.00 172194.27 9489.61 337222.04 00:21:55.536 =================================================================================================================== 00:21:55.536 Total : 3891.58 243.22 0.00 0.00 151618.92 3185.05 337222.04 00:21:55.794 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:55.794 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:55.794 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:55.794 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:55.794 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:55.794 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:55.794 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:55.794 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:55.794 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:55.794 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:55.794 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:55.794 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:55.794 rmmod nvme_rdma 00:21:55.794 rmmod nvme_fabrics 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2742297 ']' 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2742297 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2742297 ']' 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2742297 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2742297 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2742297' 00:21:56.053 killing process with pid 2742297 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2742297 00:21:56.053 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2742297 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:56.621 00:21:56.621 real 0m15.408s 00:21:56.621 user 0m31.477s 00:21:56.621 sys 0m7.708s 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:56.621 ************************************ 00:21:56.621 END TEST nvmf_shutdown_tc1 00:21:56.621 ************************************ 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:56.621 ************************************ 00:21:56.621 START TEST nvmf_shutdown_tc2 00:21:56.621 ************************************ 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.621 07:27:28 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:56.621 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:56.621 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:56.621 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:56.622 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:56.622 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:21:56.622 
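The entries that follow load the kernel RDMA stack before any interface probing. Condensed, load_ib_rdma_modules amounts to the loop below; the module list is taken from this log, while the error handling is illustrative:

    # Sketch of the module-load step performed in the next entries
    # (load_ib_rdma_modules in nvmf/common.sh).
    load_ib_rdma_modules_sketch() {
        [[ $(uname) == Linux ]] || return 0    # this RDMA stack is Linux-only
        local mod
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod" || { echo "failed to load $mod" >&2; return 1; }
        done
    }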
07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:56.622 07:27:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:56.622 07:27:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:56.622 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:56.622 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:56.622 altname enp217s0f0np0 00:21:56.622 altname ens818f0np0 00:21:56.622 inet 192.168.100.8/24 scope global mlx_0_0 00:21:56.622 valid_lft forever preferred_lft forever 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:56.622 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:56.622 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:56.622 altname enp217s0f1np1 00:21:56.622 altname ens818f1np1 00:21:56.622 inet 192.168.100.9/24 scope global mlx_0_1 00:21:56.622 valid_lft forever preferred_lft forever 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:56.622 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:56.623 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:56.623 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:56.623 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:56.623 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:56.623 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:56.623 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:56.623 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:56.623 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 
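Each repeated ip/awk/cut triplet above and below resolves one interface to its IPv4 address. As a single helper, the pipeline itself is verbatim from the log while the wrapper is a sketch:

    # Return the first IPv4 address of an interface, as get_ip_address does.
    get_ip_address_sketch() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    # On this system: get_ip_address_sketch mlx_0_0 -> 192.168.100.8
    #                 get_ip_address_sketch mlx_0_1 -> 192.168.100.9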
00:21:56.623 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:56.881 192.168.100.9' 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:56.881 192.168.100.9' 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:56.881 192.168.100.9' 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:21:56.881 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2743800 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2743800 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2743800 ']' 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.882 07:27:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.882 [2024-07-25 07:27:29.264115] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:21:56.882 [2024-07-25 07:27:29.264165] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.882 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.882 [2024-07-25 07:27:29.349376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.141 [2024-07-25 07:27:29.417903] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.141 [2024-07-25 07:27:29.417944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.141 [2024-07-25 07:27:29.417953] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.141 [2024-07-25 07:27:29.417962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.141 [2024-07-25 07:27:29.417968] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
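The target launch recorded above, reduced to a standalone sketch: nvmf_tgt is pinned to cores 1-4 (-m 0x1E = 0b11110) with every tracepoint group enabled (-e 0xFFFF), and the test then blocks until the RPC socket appears. The polling loop below is illustrative; waitforlisten's real implementation differs.

    bin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
    "$bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Wait for /var/tmp/spdk.sock, bailing out if the target dies first.
    until [[ -S /var/tmp/spdk.sock ]]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.1
    done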
00:21:57.141 [2024-07-25 07:27:29.418080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.141 [2024-07-25 07:27:29.418146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.141 [2024-07-25 07:27:29.418240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.141 [2024-07-25 07:27:29.418241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:21:57.708 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.708 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:57.708 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.708 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:57.708 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.708 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.708 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:57.708 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.708 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.708 [2024-07-25 07:27:30.145755] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a760d0/0x1a7a5c0) succeed. 00:21:57.708 [2024-07-25 07:27:30.155217] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a77710/0x1abbc50) succeed. 
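With both mlx5 IB devices created, shutdown.sh@20 has just created the RDMA transport, and the rpcs.txt batch assembled below provisions one Malloc-backed subsystem per cnode. Written out as direct rpc.py calls it is roughly the following sketch; the malloc size and block size are assumptions, everything else matches the log:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in {1..10}; do
        $RPC bdev_malloc_create -b Malloc$i 128 512    # 128 MiB, 512 B blocks (assumed)
        $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a 192.168.100.8 -s 4420 -f ipv4
    done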
00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.967 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.967 Malloc1 00:21:57.967 [2024-07-25 07:27:30.377599] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:57.967 Malloc2 00:21:57.967 Malloc3 00:21:57.967 Malloc4 00:21:58.225 Malloc5 00:21:58.225 Malloc6 00:21:58.225 Malloc7 00:21:58.225 Malloc8 00:21:58.225 Malloc9 00:21:58.225 Malloc10 00:21:58.484 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2744118 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2744118 /var/tmp/bdevperf.sock 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2744118 ']' 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
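The perf side being started here boils down to one launch line: bdevperf reads the generated config over /dev/fd/63 (bash process substitution) and runs a 10-second verify workload at queue depth 64 with 64 KiB I/O against all ten attached controllers. A sketch, reusing the hypothetical generator sketched earlier:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json <(gen_target_json_sketch {1..10}) \
        -q 64 -o 65536 -w verify -t 10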
00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:58.485 [2024-07-25 07:27:30.869359] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
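One recurring notice worth flagging before the EAL parameter dump that follows: both EAL starts in this log print "No free 2048 kB hugepages reported on node 1". Both runs proceed, so it is informational here, but it is the first thing to check when bdevperf or nvmf_tgt fails to come up. A quick probe, assuming the standard sysfs layout:

    # Per-NUMA-node free 2 MiB hugepages (filename:count).
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages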
00:21:58.485 [2024-07-25 07:27:30.869413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744118 ] 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:21:58.485 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:58.485 07:27:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme1", 00:21:58.485 "trtype": "rdma", 00:21:58.485 "traddr": "192.168.100.8", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "4420", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.485 "hdgst": false, 00:21:58.485 "ddgst": false 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 },{ 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme2", 00:21:58.485 "trtype": "rdma", 00:21:58.485 "traddr": "192.168.100.8", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "4420", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:58.485 "hdgst": false, 00:21:58.485 "ddgst": false 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 },{ 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme3", 00:21:58.485 "trtype": "rdma", 00:21:58.485 "traddr": "192.168.100.8", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "4420", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:58.485 "hdgst": false, 00:21:58.485 "ddgst": false 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 },{ 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme4", 00:21:58.485 "trtype": "rdma", 00:21:58.485 "traddr": "192.168.100.8", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "4420", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:58.485 "hdgst": false, 00:21:58.485 "ddgst": false 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 },{ 00:21:58.485 "params": { 00:21:58.486 "name": "Nvme5", 00:21:58.486 "trtype": "rdma", 00:21:58.486 "traddr": "192.168.100.8", 00:21:58.486 "adrfam": "ipv4", 00:21:58.486 "trsvcid": "4420", 00:21:58.486 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:58.486 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:58.486 "hdgst": false, 00:21:58.486 "ddgst": false 00:21:58.486 }, 00:21:58.486 "method": "bdev_nvme_attach_controller" 00:21:58.486 },{ 00:21:58.486 "params": { 00:21:58.486 "name": "Nvme6", 00:21:58.486 "trtype": "rdma", 00:21:58.486 "traddr": "192.168.100.8", 00:21:58.486 "adrfam": "ipv4", 00:21:58.486 "trsvcid": "4420", 00:21:58.486 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:58.486 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:58.486 "hdgst": false, 00:21:58.486 "ddgst": false 00:21:58.486 }, 00:21:58.486 "method": "bdev_nvme_attach_controller" 00:21:58.486 },{ 
00:21:58.486 "params": { 00:21:58.486 "name": "Nvme7", 00:21:58.486 "trtype": "rdma", 00:21:58.486 "traddr": "192.168.100.8", 00:21:58.486 "adrfam": "ipv4", 00:21:58.486 "trsvcid": "4420", 00:21:58.486 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:58.486 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:58.486 "hdgst": false, 00:21:58.486 "ddgst": false 00:21:58.486 }, 00:21:58.486 "method": "bdev_nvme_attach_controller" 00:21:58.486 },{ 00:21:58.486 "params": { 00:21:58.486 "name": "Nvme8", 00:21:58.486 "trtype": "rdma", 00:21:58.486 "traddr": "192.168.100.8", 00:21:58.486 "adrfam": "ipv4", 00:21:58.486 "trsvcid": "4420", 00:21:58.486 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:58.486 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:58.486 "hdgst": false, 00:21:58.486 "ddgst": false 00:21:58.486 }, 00:21:58.486 "method": "bdev_nvme_attach_controller" 00:21:58.486 },{ 00:21:58.486 "params": { 00:21:58.486 "name": "Nvme9", 00:21:58.486 "trtype": "rdma", 00:21:58.486 "traddr": "192.168.100.8", 00:21:58.486 "adrfam": "ipv4", 00:21:58.486 "trsvcid": "4420", 00:21:58.486 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:58.486 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:58.486 "hdgst": false, 00:21:58.486 "ddgst": false 00:21:58.486 }, 00:21:58.486 "method": "bdev_nvme_attach_controller" 00:21:58.486 },{ 00:21:58.486 "params": { 00:21:58.486 "name": "Nvme10", 00:21:58.486 "trtype": "rdma", 00:21:58.486 "traddr": "192.168.100.8", 00:21:58.486 "adrfam": "ipv4", 00:21:58.486 "trsvcid": "4420", 00:21:58.486 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:58.486 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:58.486 "hdgst": false, 00:21:58.486 "ddgst": false 00:21:58.486 }, 00:21:58.486 "method": "bdev_nvme_attach_controller" 00:21:58.486 }' 00:21:58.486 [2024-07-25 07:27:30.956492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.744 [2024-07-25 07:27:31.026243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.682 Running I/O for 10 seconds... 
00:21:59.682 07:27:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:59.682 07:27:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:59.682 07:27:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:59.682 07:27:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.682 07:27:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=4 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 4 -ge 100 ']' 00:21:59.682 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:59.941 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:59.941 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:59.941 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:59.941 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:59.941 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.941 
07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=148 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 148 -ge 100 ']' 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2744118 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2744118 ']' 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2744118 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2744118 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2744118' 00:22:00.200 killing process with pid 2744118 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2744118 00:22:00.200 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2744118 00:22:00.200 Received shutdown signal, test time was about 0.806850 seconds 00:22:00.200 00:22:00.200 Latency(us) 00:22:00.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.200 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:00.200 Verification LBA range: start 0x0 length 0x400 00:22:00.200 Nvme1n1 : 0.79 383.47 23.97 0.00 0.00 163439.67 6029.31 228170.14 00:22:00.200 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:00.200 Verification LBA range: start 0x0 length 0x400 00:22:00.200 Nvme2n1 : 0.79 402.95 25.18 0.00 0.00 152777.85 7707.03 160222.41 00:22:00.200 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:00.200 Verification LBA range: start 0x0 length 0x400 00:22:00.200 Nvme3n1 : 0.80 402.39 25.15 0.00 0.00 149638.92 8074.04 153511.53 00:22:00.200 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:00.200 Verification LBA range: start 0x0 length 0x400 00:22:00.200 Nvme4n1 : 0.80 409.36 25.59 0.00 0.00 144164.24 5164.24 
146800.64 00:22:00.200 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:00.200 Verification LBA range: start 0x0 length 0x400 00:22:00.200 Nvme5n1 : 0.80 401.16 25.07 0.00 0.00 144584.21 8860.47 136734.31 00:22:00.200 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:00.200 Verification LBA range: start 0x0 length 0x400 00:22:00.200 Nvme6n1 : 0.80 400.61 25.04 0.00 0.00 141379.67 9175.04 130023.42 00:22:00.200 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:00.200 Verification LBA range: start 0x0 length 0x400 00:22:00.200 Nvme7n1 : 0.80 400.06 25.00 0.00 0.00 138546.54 9437.18 122473.68 00:22:00.200 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:00.200 Verification LBA range: start 0x0 length 0x400 00:22:00.200 Nvme8n1 : 0.80 402.00 25.13 0.00 0.00 134913.06 4482.66 115762.79 00:22:00.200 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:00.200 Verification LBA range: start 0x0 length 0x400 00:22:00.200 Nvme9n1 : 0.80 398.85 24.93 0.00 0.00 133414.09 10171.19 104438.17 00:22:00.200 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:00.200 Verification LBA range: start 0x0 length 0x400 00:22:00.200 Nvme10n1 : 0.81 238.15 14.88 0.00 0.00 218983.29 2451.05 382520.52 00:22:00.200 =================================================================================================================== 00:22:00.200 Total : 3839.01 239.94 0.00 0.00 149307.84 2451.05 382520.52 00:22:00.459 07:27:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:01.837 07:27:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2743800 00:22:01.837 07:27:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:01.837 07:27:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:01.837 07:27:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:01.837 rmmod nvme_rdma 00:22:01.837 rmmod nvme_fabrics 00:22:01.837 
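The gate that allowed the killprocess above is the waitforio loop traced at target/shutdown.sh@57-@69: it polls bdevperf's RPC socket for read completions on Nvme1n1 and only succeeds once at least 100 ops are seen — the first probe read 4 ops, and after a 0.25 s sleep the count was 148. A compact sketch of the same loop, with scripts/rpc.py standing in for the rpc_cmd wrapper the trace goes through:

# Sketch of the waitforio polling traced above (simplified).
waitforio() {
    local sock=$1 bdev=$2 ret=1 i count
    for ((i = 10; i != 0; i--)); do
        # Ask the running bdevperf for per-bdev I/O stats over its RPC socket.
        count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Usage mirroring the trace: poll the bdevperf socket for Nvme1n1.
waitforio /var/tmp/bdevperf.sock Nvme1n1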
07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2743800 ']' 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2743800 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2743800 ']' 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2743800 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2743800 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2743800' 00:22:01.837 killing process with pid 2743800 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2743800 00:22:01.837 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2743800 00:22:02.096 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:02.097 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:02.097 00:22:02.097 real 0m5.628s 00:22:02.097 user 0m22.511s 00:22:02.097 sys 0m1.207s 00:22:02.097 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:02.097 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.097 ************************************ 00:22:02.097 END TEST nvmf_shutdown_tc2 00:22:02.097 ************************************ 00:22:02.097 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:02.097 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:02.097 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:02.097 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:02.359 ************************************ 00:22:02.359 START TEST nvmf_shutdown_tc3 00:22:02.359 ************************************ 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 
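tc2 above tore down through killprocess twice — once for bdevperf (pid 2744118) and once for the target (pid 2743800) — and the same guard sequence will run again at the end of tc3: confirm the pid is alive with kill -0, read the process comm (reactor_0 and reactor_1 in the traces above), refuse to act on a bare sudo wrapper, then kill and reap. A simplified sketch of that sequence; the real helper in common/autotest_common.sh handles the sudo case by targeting the child process rather than bailing out as this one does:

# Simplified sketch of the killprocess guard sequence traced in tc2 above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2> /dev/null || return 0   # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # Never signal a bare sudo wrapper (simplification; see note above).
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # works here because the test shell started the process
}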
00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:02.359 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 
0x1015 == \0\x\1\0\1\7 ]] 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:02.359 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:02.359 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:02.360 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:02.360 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:02.360 07:27:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:02.360 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:02.360 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:02.360 altname enp217s0f0np0 00:22:02.360 altname ens818f0np0 00:22:02.360 inet 192.168.100.8/24 scope global mlx_0_0 00:22:02.360 valid_lft forever preferred_lft forever 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:02.360 07:27:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:02.360 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:02.360 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:02.360 altname enp217s0f1np1 00:22:02.360 altname ens818f1np1 00:22:02.360 inet 192.168.100.9/24 scope global mlx_0_1 00:22:02.360 valid_lft forever preferred_lft forever 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:02.360 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.361 07:27:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:02.361 192.168.100.9' 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:02.361 192.168.100.9' 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:02.361 192.168.100.9' 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:02.361 07:27:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2744799 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2744799 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2744799 ']' 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.361 07:27:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:02.686 [2024-07-25 07:27:34.927282] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:22:02.686 [2024-07-25 07:27:34.927330] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.686 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.686 [2024-07-25 07:27:35.011560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.686 [2024-07-25 07:27:35.084007] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.686 [2024-07-25 07:27:35.084047] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:02.686 [2024-07-25 07:27:35.084059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.686 [2024-07-25 07:27:35.084068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.686 [2024-07-25 07:27:35.084075] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.686 [2024-07-25 07:27:35.084184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.686 [2024-07-25 07:27:35.084268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.686 [2024-07-25 07:27:35.084382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.686 [2024-07-25 07:27:35.084383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:22:03.259 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.259 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:03.259 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.259 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.259 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.259 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.259 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:03.259 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.259 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.518 [2024-07-25 07:27:35.807584] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c980d0/0x1c9c5c0) succeed. 00:22:03.518 [2024-07-25 07:27:35.816790] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c99710/0x1cddc50) succeed. 
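The nvmftestinit trace above resolves each mlx5 netdev's IPv4 address with the same three-stage pipeline every time — ip -o -4 for one line per address, awk for field 4, cut to strip the prefix length — then takes NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP from the head and tail of the joined list (192.168.100.8 and 192.168.100.9 here). Reduced to its core, with the mlx_0_0/mlx_0_1 names hard-coded where the trace walks get_rdma_if_list:

# Sketch of the address-discovery pipeline traced above (nvmf/common.sh@112-@113).
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per address; field 4 is addr/prefixlen.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)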
00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.518 07:27:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.518 Malloc1 00:22:03.518 [2024-07-25 07:27:36.043086] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:03.777 Malloc2 00:22:03.777 Malloc3 00:22:03.777 Malloc4 00:22:03.777 Malloc5 00:22:03.777 Malloc6 00:22:03.777 Malloc7 00:22:04.036 Malloc8 00:22:04.036 Malloc9 00:22:04.036 Malloc10 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2745121 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2745121 /var/tmp/bdevperf.sock 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2745121 ']' 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:04.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
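With the transport, the ten subsystems, and the Malloc1-Malloc10 bdevs in place, tc3 launches its bdevperf instance the same way tc2 did: the generated config arrives on /dev/fd/63 through process substitution, and the test blocks on the waitforlisten helper until the RPC socket answers. The launch pattern, with $rootdir standing in for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk path shown in the trace, and gen_target_json being the sketch given earlier in place of the real gen_nvmf_target_json:

# Sketch of the bdevperf launch traced at target/shutdown.sh@124-@126 above.
"$rootdir/build/examples/bdevperf" \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock

Passing the config over a file descriptor instead of a temp file means there is nothing extra for stoptarget to clean up afterwards, which is why the teardown above only removes bdevperf.conf and rpcs.txt.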
00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.036 { 00:22:04.036 "params": { 00:22:04.036 "name": "Nvme$subsystem", 00:22:04.036 "trtype": "$TEST_TRANSPORT", 00:22:04.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.036 "adrfam": "ipv4", 00:22:04.036 "trsvcid": "$NVMF_PORT", 00:22:04.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.036 "hdgst": ${hdgst:-false}, 00:22:04.036 "ddgst": ${ddgst:-false} 00:22:04.036 }, 00:22:04.036 "method": "bdev_nvme_attach_controller" 00:22:04.036 } 00:22:04.036 EOF 00:22:04.036 )") 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.036 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.036 { 00:22:04.036 "params": { 00:22:04.036 "name": "Nvme$subsystem", 00:22:04.036 "trtype": "$TEST_TRANSPORT", 00:22:04.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.036 "adrfam": "ipv4", 00:22:04.036 "trsvcid": "$NVMF_PORT", 00:22:04.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.036 "hdgst": ${hdgst:-false}, 00:22:04.037 "ddgst": ${ddgst:-false} 00:22:04.037 }, 00:22:04.037 "method": "bdev_nvme_attach_controller" 00:22:04.037 } 00:22:04.037 EOF 00:22:04.037 )") 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.037 { 00:22:04.037 "params": { 00:22:04.037 "name": "Nvme$subsystem", 00:22:04.037 "trtype": "$TEST_TRANSPORT", 00:22:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.037 "adrfam": "ipv4", 00:22:04.037 "trsvcid": "$NVMF_PORT", 00:22:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.037 "hdgst": ${hdgst:-false}, 00:22:04.037 "ddgst": ${ddgst:-false} 00:22:04.037 }, 00:22:04.037 "method": "bdev_nvme_attach_controller" 00:22:04.037 } 00:22:04.037 EOF 00:22:04.037 )") 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.037 { 00:22:04.037 "params": { 00:22:04.037 "name": "Nvme$subsystem", 
00:22:04.037 "trtype": "$TEST_TRANSPORT", 00:22:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.037 "adrfam": "ipv4", 00:22:04.037 "trsvcid": "$NVMF_PORT", 00:22:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.037 "hdgst": ${hdgst:-false}, 00:22:04.037 "ddgst": ${ddgst:-false} 00:22:04.037 }, 00:22:04.037 "method": "bdev_nvme_attach_controller" 00:22:04.037 } 00:22:04.037 EOF 00:22:04.037 )") 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.037 { 00:22:04.037 "params": { 00:22:04.037 "name": "Nvme$subsystem", 00:22:04.037 "trtype": "$TEST_TRANSPORT", 00:22:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.037 "adrfam": "ipv4", 00:22:04.037 "trsvcid": "$NVMF_PORT", 00:22:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.037 "hdgst": ${hdgst:-false}, 00:22:04.037 "ddgst": ${ddgst:-false} 00:22:04.037 }, 00:22:04.037 "method": "bdev_nvme_attach_controller" 00:22:04.037 } 00:22:04.037 EOF 00:22:04.037 )") 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.037 { 00:22:04.037 "params": { 00:22:04.037 "name": "Nvme$subsystem", 00:22:04.037 "trtype": "$TEST_TRANSPORT", 00:22:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.037 "adrfam": "ipv4", 00:22:04.037 "trsvcid": "$NVMF_PORT", 00:22:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.037 "hdgst": ${hdgst:-false}, 00:22:04.037 "ddgst": ${ddgst:-false} 00:22:04.037 }, 00:22:04.037 "method": "bdev_nvme_attach_controller" 00:22:04.037 } 00:22:04.037 EOF 00:22:04.037 )") 00:22:04.037 [2024-07-25 07:27:36.530353] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:22:04.037 [2024-07-25 07:27:36.530406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745121 ] 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.037 { 00:22:04.037 "params": { 00:22:04.037 "name": "Nvme$subsystem", 00:22:04.037 "trtype": "$TEST_TRANSPORT", 00:22:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.037 "adrfam": "ipv4", 00:22:04.037 "trsvcid": "$NVMF_PORT", 00:22:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.037 "hdgst": ${hdgst:-false}, 00:22:04.037 "ddgst": ${ddgst:-false} 00:22:04.037 }, 00:22:04.037 "method": "bdev_nvme_attach_controller" 00:22:04.037 } 00:22:04.037 EOF 00:22:04.037 )") 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.037 { 00:22:04.037 "params": { 00:22:04.037 "name": "Nvme$subsystem", 00:22:04.037 "trtype": "$TEST_TRANSPORT", 00:22:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.037 "adrfam": "ipv4", 00:22:04.037 "trsvcid": "$NVMF_PORT", 00:22:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.037 "hdgst": ${hdgst:-false}, 00:22:04.037 "ddgst": ${ddgst:-false} 00:22:04.037 }, 00:22:04.037 "method": "bdev_nvme_attach_controller" 00:22:04.037 } 00:22:04.037 EOF 00:22:04.037 )") 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.037 { 00:22:04.037 "params": { 00:22:04.037 "name": "Nvme$subsystem", 00:22:04.037 "trtype": "$TEST_TRANSPORT", 00:22:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.037 "adrfam": "ipv4", 00:22:04.037 "trsvcid": "$NVMF_PORT", 00:22:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.037 "hdgst": ${hdgst:-false}, 00:22:04.037 "ddgst": ${ddgst:-false} 00:22:04.037 }, 00:22:04.037 "method": "bdev_nvme_attach_controller" 00:22:04.037 } 00:22:04.037 EOF 00:22:04.037 )") 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.037 { 00:22:04.037 "params": { 00:22:04.037 "name": 
"Nvme$subsystem", 00:22:04.037 "trtype": "$TEST_TRANSPORT", 00:22:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.037 "adrfam": "ipv4", 00:22:04.037 "trsvcid": "$NVMF_PORT", 00:22:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.037 "hdgst": ${hdgst:-false}, 00:22:04.037 "ddgst": ${ddgst:-false} 00:22:04.037 }, 00:22:04.037 "method": "bdev_nvme_attach_controller" 00:22:04.037 } 00:22:04.037 EOF 00:22:04.037 )") 00:22:04.037 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:04.296 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:04.296 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.296 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:04.296 07:27:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:04.296 "params": { 00:22:04.296 "name": "Nvme1", 00:22:04.296 "trtype": "rdma", 00:22:04.296 "traddr": "192.168.100.8", 00:22:04.296 "adrfam": "ipv4", 00:22:04.296 "trsvcid": "4420", 00:22:04.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.296 "hdgst": false, 00:22:04.296 "ddgst": false 00:22:04.296 }, 00:22:04.296 "method": "bdev_nvme_attach_controller" 00:22:04.296 },{ 00:22:04.296 "params": { 00:22:04.296 "name": "Nvme2", 00:22:04.296 "trtype": "rdma", 00:22:04.296 "traddr": "192.168.100.8", 00:22:04.296 "adrfam": "ipv4", 00:22:04.296 "trsvcid": "4420", 00:22:04.296 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:04.296 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:04.296 "hdgst": false, 00:22:04.296 "ddgst": false 00:22:04.296 }, 00:22:04.296 "method": "bdev_nvme_attach_controller" 00:22:04.296 },{ 00:22:04.296 "params": { 00:22:04.296 "name": "Nvme3", 00:22:04.296 "trtype": "rdma", 00:22:04.296 "traddr": "192.168.100.8", 00:22:04.296 "adrfam": "ipv4", 00:22:04.296 "trsvcid": "4420", 00:22:04.296 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:04.296 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:04.296 "hdgst": false, 00:22:04.296 "ddgst": false 00:22:04.296 }, 00:22:04.296 "method": "bdev_nvme_attach_controller" 00:22:04.296 },{ 00:22:04.296 "params": { 00:22:04.296 "name": "Nvme4", 00:22:04.296 "trtype": "rdma", 00:22:04.296 "traddr": "192.168.100.8", 00:22:04.296 "adrfam": "ipv4", 00:22:04.296 "trsvcid": "4420", 00:22:04.296 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:04.296 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:04.296 "hdgst": false, 00:22:04.296 "ddgst": false 00:22:04.296 }, 00:22:04.296 "method": "bdev_nvme_attach_controller" 00:22:04.296 },{ 00:22:04.296 "params": { 00:22:04.296 "name": "Nvme5", 00:22:04.296 "trtype": "rdma", 00:22:04.296 "traddr": "192.168.100.8", 00:22:04.296 "adrfam": "ipv4", 00:22:04.296 "trsvcid": "4420", 00:22:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:04.297 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:04.297 "hdgst": false, 00:22:04.297 "ddgst": false 00:22:04.297 }, 00:22:04.297 "method": "bdev_nvme_attach_controller" 00:22:04.297 },{ 00:22:04.297 "params": { 00:22:04.297 "name": "Nvme6", 00:22:04.297 "trtype": "rdma", 00:22:04.297 "traddr": "192.168.100.8", 00:22:04.297 "adrfam": "ipv4", 00:22:04.297 "trsvcid": "4420", 00:22:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:04.297 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:04.297 "hdgst": false, 00:22:04.297 
"ddgst": false 00:22:04.297 }, 00:22:04.297 "method": "bdev_nvme_attach_controller" 00:22:04.297 },{ 00:22:04.297 "params": { 00:22:04.297 "name": "Nvme7", 00:22:04.297 "trtype": "rdma", 00:22:04.297 "traddr": "192.168.100.8", 00:22:04.297 "adrfam": "ipv4", 00:22:04.297 "trsvcid": "4420", 00:22:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:04.297 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:04.297 "hdgst": false, 00:22:04.297 "ddgst": false 00:22:04.297 }, 00:22:04.297 "method": "bdev_nvme_attach_controller" 00:22:04.297 },{ 00:22:04.297 "params": { 00:22:04.297 "name": "Nvme8", 00:22:04.297 "trtype": "rdma", 00:22:04.297 "traddr": "192.168.100.8", 00:22:04.297 "adrfam": "ipv4", 00:22:04.297 "trsvcid": "4420", 00:22:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:04.297 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:04.297 "hdgst": false, 00:22:04.297 "ddgst": false 00:22:04.297 }, 00:22:04.297 "method": "bdev_nvme_attach_controller" 00:22:04.297 },{ 00:22:04.297 "params": { 00:22:04.297 "name": "Nvme9", 00:22:04.297 "trtype": "rdma", 00:22:04.297 "traddr": "192.168.100.8", 00:22:04.297 "adrfam": "ipv4", 00:22:04.297 "trsvcid": "4420", 00:22:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:04.297 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:04.297 "hdgst": false, 00:22:04.297 "ddgst": false 00:22:04.297 }, 00:22:04.297 "method": "bdev_nvme_attach_controller" 00:22:04.297 },{ 00:22:04.297 "params": { 00:22:04.297 "name": "Nvme10", 00:22:04.297 "trtype": "rdma", 00:22:04.297 "traddr": "192.168.100.8", 00:22:04.297 "adrfam": "ipv4", 00:22:04.297 "trsvcid": "4420", 00:22:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:04.297 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:04.297 "hdgst": false, 00:22:04.297 "ddgst": false 00:22:04.297 }, 00:22:04.297 "method": "bdev_nvme_attach_controller" 00:22:04.297 }' 00:22:04.297 [2024-07-25 07:27:36.618367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.297 [2024-07-25 07:27:36.688335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.234 Running I/O for 10 seconds... 
00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.234 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.493 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.493 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:05.493 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:05.493 07:27:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:05.752 07:27:38 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=147 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 147 -ge 100 ']' 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2744799 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2744799 ']' 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2744799 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2744799 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2744799' 00:22:05.752 killing process with pid 2744799 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2744799 00:22:05.752 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2744799 00:22:06.319 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:06.319 07:27:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:06.897 [2024-07-25 07:27:39.274635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.897 [2024-07-25 07:27:39.274678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.897 [2024-07-25 07:27:39.274691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.898 [2024-07-25 07:27:39.274700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.274709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.898 [2024-07-25 07:27:39.274718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.274727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.898 [2024-07-25 07:27:39.274736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.277274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:06.898 [2024-07-25 07:27:39.277323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.898 [2024-07-25 07:27:39.277379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.898 [2024-07-25 07:27:39.277414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:0 sqhd:6100 p:1 m:1 dnr:0 00:22:06.898 [2024-07-25 07:27:39.277447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.898 [2024-07-25 07:27:39.277478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:0 sqhd:6100 p:1 m:1 dnr:0 00:22:06.898 [2024-07-25 07:27:39.277511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.898 [2024-07-25 07:27:39.277543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:0 sqhd:6100 p:1 m:1 dnr:0 00:22:06.898 [2024-07-25 07:27:39.277576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.898 [2024-07-25 07:27:39.277606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:0 sqhd:6100 p:1 m:1 dnr:0 00:22:06.898 [2024-07-25 07:27:39.279770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:06.898 [2024-07-25 07:27:39.279812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
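(Editor's note: the kill at 07:27:38 was gated by the waitforio polling traced earlier — target/shutdown.sh @50-@69 — which simply polls bdevperf's RPC socket until Nvme1n1 reports at least 100 completed reads. A reconstruction from the trace; rpc_cmd is the harness wrapper around scripts/rpc.py, assumed to be in scope:)

    # Reconstructed from the target/shutdown.sh xtrace (@50-@69): poll the
    # bdevperf RPC socket until the bdev shows >= 100 reads, up to 10 tries.
    waitforio() {
        local sock=$1 bdev=$2
        [[ -n $sock && -n $bdev ]] || return 1   # @50/@54: both args required
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do          # @59: bounded retry loop
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            # In this run the first pass saw 3 ops, the second 147, ending the loop.
            if [[ $read_io_count -ge 100 ]]; then
                ret=0                            # @64
                break                            # @65
            fi
            sleep 0.25                           # @67
        done
        return $ret                              # @69
    }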
00:22:06.898 [2024-07-25 07:27:39.279940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.898 [2024-07-25 07:27:39.279975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.280009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.898 [2024-07-25 07:27:39.280039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.280072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.898 [2024-07-25 07:27:39.280103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.280136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.898 [2024-07-25 07:27:39.280168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.282754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:06.898 [2024-07-25 07:27:39.282796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:06.898 [2024-07-25 07:27:39.285351] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 00:22:06.898 [2024-07-25 07:27:39.285394] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.898 [2024-07-25 07:27:39.287554] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:22:06.898 [2024-07-25 07:27:39.287597] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.898 [2024-07-25 07:27:39.287873] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.898 [2024-07-25 07:27:39.287894] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.898 [2024-07-25 07:27:39.287931] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
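(Editor's note: the "killing process with pid 2744799" sequence above is autotest_common.sh's killprocess helper tearing down the nvmf target while bdevperf still has I/O in flight — the whole point of shutdown tc3. Roughly, as reconstructed from the xtrace at 07:27:38; the sudo branch body is not visible in this run and is omitted:)

    # Reconstructed from the autotest_common.sh xtrace (@950-@974).
    killprocess() {
        [[ -n $1 ]] || return 1                 # @950: a pid is required
        kill -0 "$1" || return                  # @954: bail if it is already gone
        local process_name
        if [[ $(uname) == Linux ]]; then        # @955
            process_name=$(ps --no-headers -o comm= "$1")  # @956: reactor_1 here
        fi
        # @960 special-cases process_name == sudo (branch not taken in this run).
        echo "killing process with pid $1"      # @968
        kill "$1"                               # @969: SIGTERM the nvmf target
        wait "$1" || true                       # @974: reap it so the next stage starts clean
    }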
00:22:06.898 [2024-07-25 07:27:39.288022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.898 [2024-07-25 07:27:39.288043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:06.898 [2024-07-25 07:27:39.290754] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:06.898 [2024-07-25 07:27:39.290780] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:06.898 [2024-07-25 07:27:39.290790] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:22:06.898 [2024-07-25 07:27:39.291088] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:06.898 [2024-07-25 07:27:39.291104] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:06.898 [2024-07-25 07:27:39.291114] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:22:06.898 [2024-07-25 07:27:39.297897] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.898 [2024-07-25 07:27:39.307904] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.898 [2024-07-25 07:27:39.317941] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.898 [2024-07-25 07:27:39.327985] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.898 [2024-07-25 07:27:39.338014] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
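(Editor's note: everything from here to the end of the excerpt is expected fallout from killing the target mid-run — each in-flight bdevperf command completes with ABORTED - SQ DELETION (00/08), and every reconnect attempt is rejected at the RDMA CM level (RDMA_CM_EVENT_REJECTED, connect error -74) until the test brings a target back. When triaging a captured log like this one, a quick summary of the storm is usually enough; for example, against a hypothetical saved file:)

    # Rough triage of the abort/reconnect storm in a saved console log.
    grep -c 'ABORTED - SQ DELETION' nvmf_shutdown_tc3.log
    grep -c 'RDMA_CM_EVENT_REJECTED' nvmf_shutdown_tc3.log
    # Which controllers went through a reset cycle:
    grep -o '\[nqn[^]]*\] resetting controller' nvmf_shutdown_tc3.log | sort | uniq -c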
00:22:06.898 [2024-07-25 07:27:39.341044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 
07:27:39.341258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183c00 00:22:06.898 [2024-07-25 07:27:39.341330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x184100 00:22:06.898 [2024-07-25 07:27:39.341353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x184100 00:22:06.898 [2024-07-25 07:27:39.341373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.898 [2024-07-25 07:27:39.341385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341639] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 
nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x184100 00:22:06.899 [2024-07-25 07:27:39.341982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.341995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184400 00:22:06.899 [2024-07-25 07:27:39.342004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.342015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x184400 00:22:06.899 [2024-07-25 07:27:39.342024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.342037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184400 00:22:06.899 [2024-07-25 07:27:39.342045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.342060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x184400 00:22:06.899 [2024-07-25 07:27:39.342068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.342080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184400 00:22:06.899 [2024-07-25 07:27:39.342089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.342102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184400 00:22:06.899 [2024-07-25 07:27:39.342111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.342123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184400 00:22:06.899 [2024-07-25 07:27:39.342131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.899 [2024-07-25 07:27:39.342144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x184400 00:22:06.900 [2024-07-25 07:27:39.342153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.900 [2024-07-25 07:27:39.342165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184400 00:22:06.900 [2024-07-25 07:27:39.342174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.900 [2024-07-25 07:27:39.342186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184400 00:22:06.900 [2024-07-25 07:27:39.342195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.900 [2024-07-25 07:27:39.342207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 
len:0x10000 key:0x184400 00:22:06.900 [2024-07-25 07:27:39.342216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.900 [2024-07-25 07:27:39.342228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184400 00:22:06.900 [2024-07-25 07:27:39.342238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.900 [2024-07-25 07:27:39.342250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184400 00:22:06.900 [2024-07-25 07:27:39.342259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.900 [2024-07-25 07:27:39.342271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184400 00:22:06.900 [2024-07-25 07:27:39.342279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.900 [2024-07-25 07:27:39.342291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184400 00:22:06.900 [2024-07-25 07:27:39.342300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.900 [2024-07-25 07:27:39.342311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x184400 00:22:06.900 [2024-07-25 07:27:39.342320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.900 [2024-07-25 07:27:39.342333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x184400 00:22:06.900 [2024-07-25 07:27:39.342342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.900 [2024-07-25 07:27:39.342354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184400 00:22:06.900 [2024-07-25 07:27:39.342363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.900 [2024-07-25 07:27:39.342375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x184400 00:22:06.900 [2024-07-25 07:27:39.342384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0 00:22:06.900 [2024-07-25 07:27:39.342396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x183c00 
00:22:06.900 [2024-07-25 07:27:39.342405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.900 [2024-07-25 07:27:39.345506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:06.900 [2024-07-25 07:27:39.347260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004df580 len:0x10000 key:0x183b00
00:22:06.900 [2024-07-25 07:27:39.347315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
[duplicate log entries elided: the remaining 63 outstanding WRITE/READ commands (lba 32768-40832) are printed identically, each completed with ABORTED - SQ DELETION (00/08)]
00:22:06.902 [2024-07-25 07:27:39.351201] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller.
00:22:06.902 [2024-07-25 07:27:39.351259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001993fa80 len:0x10000 key:0x182d00
00:22:06.902 [2024-07-25 07:27:39.351292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
[duplicate log entries elided: the remaining 63 outstanding WRITE/READ commands (lba 33664-41728) are printed identically, each completed with ABORTED - SQ DELETION (00/08)]
00:22:06.904 [2024-07-25 07:27:39.355597] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller.
00:22:06.904 [2024-07-25 07:27:39.355621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 len:0x10000 key:0x182f00
00:22:06.904 [2024-07-25 07:27:39.355640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
[duplicate log entries elided: further outstanding WRITE/READ commands are printed and aborted the same way; the dump is cut off below]
00:22:06.905 [2024-07-25 07:27:39.363513] nvme_qpair.c:
00:22:06.905 [2024-07-25 07:27:39.366491] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller.
[... repeated WRITE/READ nvme_io_qpair_print_command and ABORTED - SQ DELETION spdk_nvme_print_completion notices omitted ...]
00:22:06.907 [2024-07-25 07:27:39.371244] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller.
[... repeated WRITE/READ nvme_io_qpair_print_command and ABORTED - SQ DELETION spdk_nvme_print_completion notices omitted ...]
00:22:06.909 [2024-07-25 07:27:39.375923] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller.
00:22:06.909 [2024-07-25 07:27:39.375949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183300
00:22:06.909 [2024-07-25 07:27:39.375963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.909 [2024-07-25 07:27:39.375982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183300
00:22:06.909 [2024-07-25 07:27:39.375995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.909 [2024-07-25 07:27:39.376010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183300
00:22:06.909 [2024-07-25 07:27:39.376023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.909 [2024-07-25 07:27:39.376039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183300
00:22:06.909 [2024-07-25 07:27:39.376052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.909 [2024-07-25 07:27:39.376068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183300
00:22:06.909 [2024-07-25 07:27:39.376081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.909 [2024-07-25 07:27:39.376097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183300
00:22:06.909 [2024-07-25 07:27:39.376110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.909 [2024-07-25 07:27:39.376131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183300
00:22:06.909 [2024-07-25 07:27:39.376143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.909 [2024-07-25 07:27:39.376159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183300
00:22:06.909 [2024-07-25 07:27:39.376173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.909 [2024-07-25 07:27:39.376188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183300
00:22:06.909 [2024-07-25 07:27:39.376201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.909 [2024-07-25 07:27:39.376217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183300
00:22:06.910 [2024-07-25 07:27:39.376230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183d00
00:22:06.910 [2024-07-25 07:27:39.376641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183700
00:22:06.910 [2024-07-25 07:27:39.376670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b907000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.376699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b928000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.376728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e457000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.376758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e478000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.376789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4e7000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.376817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b508000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.376846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b87000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.376876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ba8000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.376906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcf6000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.376934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d5c6000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.376963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.376979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d5a5000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.376992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.377008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d584000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.377020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.377036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d563000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.377050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.377065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d542000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.377079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.377094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d521000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.377107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.377125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d500000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.377138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.377153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011808000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.377167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.377183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000117e7000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.377196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.377212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000117c6000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.377225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.377241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000117a5000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.377254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.910 [2024-07-25 07:27:39.377270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011784000 len:0x10000 key:0x183f00
00:22:06.910 [2024-07-25 07:27:39.377283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011763000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011742000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011721000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011700000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e55f000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e53e000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e51d000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4fc000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4db000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4ba000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e499000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011aff000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ade000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011abd000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a9c000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a7b000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a5a000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.377803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a39000 len:0x10000 key:0x183f00
00:22:06.911 [2024-07-25 07:27:39.377816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.380053] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller.
00:22:06.911 [2024-07-25 07:27:39.380080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183400
00:22:06.911 [2024-07-25 07:27:39.380093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.380111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183400
00:22:06.911 [2024-07-25 07:27:39.380124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.380140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183400
00:22:06.911 [2024-07-25 07:27:39.380153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.380169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183400
00:22:06.911 [2024-07-25 07:27:39.380181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.380196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183400
00:22:06.911 [2024-07-25 07:27:39.380210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.380225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183400
00:22:06.911 [2024-07-25 07:27:39.380238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.380252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183400
00:22:06.911 [2024-07-25 07:27:39.380265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.380281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183400
00:22:06.911 [2024-07-25 07:27:39.380294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.380308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183400
00:22:06.911 [2024-07-25 07:27:39.380321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.380339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183400
00:22:06.911 [2024-07-25 07:27:39.380353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.911 [2024-07-25 07:27:39.380367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183400
00:22:06.911 [2024-07-25 07:27:39.380381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183400
00:22:06.912 [2024-07-25 07:27:39.380410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183400
00:22:06.912 [2024-07-25 07:27:39.380438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183400
00:22:06.912 [2024-07-25 07:27:39.380467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183400
00:22:06.912 [2024-07-25 07:27:39.380497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.380975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.380991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x184300
00:22:06.912 [2024-07-25 07:27:39.381416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.912 [2024-07-25 07:27:39.381432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183c00
00:22:06.913 [2024-07-25 07:27:39.381446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183d00
00:22:06.913 [2024-07-25 07:27:39.381474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012780000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000127a1000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000127c2000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000127e3000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012804000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012825000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012846000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012867000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012888000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128a9000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128ca000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128eb000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001290c000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001292d000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001294e000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.381937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001296f000 len:0x10000 key:0x183f00
00:22:06.913 [2024-07-25 07:27:39.381950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ffc97000 sqhd:52b0 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.384846] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller.
00:22:06.913 [2024-07-25 07:27:39.385128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.385146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:11818 cdw0:1d10c530 sqhd:5ad4 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.385165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.385179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:11818 cdw0:1d10c530 sqhd:5ad4 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.385193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.385206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:11818 cdw0:1d10c530 sqhd:5ad4 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.385219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.385232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:11818 cdw0:1d10c530 sqhd:5ad4 p:0 m:0 dnr:0
00:22:06.913 [2024-07-25 07:27:39.387537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:22:06.913 [2024-07-25 07:27:39.387580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:22:06.913 [2024-07-25 07:27:39.387611] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:06.913 [2024-07-25 07:27:39.387672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.387707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0
00:22:06.913 [2024-07-25 07:27:39.387739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.387770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0
00:22:06.913 [2024-07-25 07:27:39.387803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.387835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0
00:22:06.913 [2024-07-25 07:27:39.387868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.387900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0
00:22:06.913 [2024-07-25 07:27:39.390437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:22:06.913 [2024-07-25 07:27:39.390478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:06.913 [2024-07-25 07:27:39.390507] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:06.913 [2024-07-25 07:27:39.390555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.390588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0
00:22:06.913 [2024-07-25 07:27:39.390621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.390690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0
00:22:06.913 [2024-07-25 07:27:39.390729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.390742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0
00:22:06.913 [2024-07-25 07:27:39.390758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.390771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0
00:22:06.913 [2024-07-25 07:27:39.392987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:22:06.913 [2024-07-25 07:27:39.393028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:22:06.913 [2024-07-25 07:27:39.393058] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:06.913 [2024-07-25 07:27:39.393106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.393139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0
00:22:06.913 [2024-07-25 07:27:39.393172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.393203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0
00:22:06.913 [2024-07-25 07:27:39.393236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.913 [2024-07-25 07:27:39.393267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0
00:22:06.913 [2024-07-25 07:27:39.393302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.914 [2024-07-25 07:27:39.393334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0
00:22:06.914 [2024-07-25 07:27:39.395685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:22:06.914 [2024-07-25 07:27:39.395726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:22:06.914 [2024-07-25 07:27:39.395756] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:06.914 [2024-07-25 07:27:39.395800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.914 [2024-07-25 07:27:39.395834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0 00:22:06.914 [2024-07-25 07:27:39.395866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.914 [2024-07-25 07:27:39.395898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0 00:22:06.914 [2024-07-25 07:27:39.395930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.914 [2024-07-25 07:27:39.395962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0 00:22:06.914 [2024-07-25 07:27:39.395995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.914 [2024-07-25 07:27:39.396026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0 00:22:06.914 [2024-07-25 07:27:39.398087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:06.914 [2024-07-25 07:27:39.398136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:06.914 [2024-07-25 07:27:39.398149] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:06.914 [2024-07-25 07:27:39.398169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.914 [2024-07-25 07:27:39.398182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0 00:22:06.914 [2024-07-25 07:27:39.398195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.914 [2024-07-25 07:27:39.398209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0 00:22:06.914 [2024-07-25 07:27:39.398222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.914 [2024-07-25 07:27:39.398235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0 00:22:06.914 [2024-07-25 07:27:39.398248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.914 [2024-07-25 07:27:39.398261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0 00:22:06.914 [2024-07-25 07:27:39.400347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:06.914 [2024-07-25 07:27:39.400389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:06.914 [2024-07-25 07:27:39.400419] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:06.914 [2024-07-25 07:27:39.400468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.914 [2024-07-25 07:27:39.400501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0 00:22:06.914 [2024-07-25 07:27:39.400534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.914 [2024-07-25 07:27:39.400565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0 00:22:06.914 [2024-07-25 07:27:39.400598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.914 [2024-07-25 07:27:39.400639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0 00:22:06.914 [2024-07-25 07:27:39.400673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.914 [2024-07-25 07:27:39.400705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5105 cdw0:1d10c530 sqhd:6100 p:1 m:1 dnr:0 00:22:07.174 [2024-07-25 07:27:39.419442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:07.174 [2024-07-25 07:27:39.419496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:07.174 [2024-07-25 07:27:39.419529] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:07.174 [2024-07-25 07:27:39.419656] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:07.174 [2024-07-25 07:27:39.419694] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:07.174 [2024-07-25 07:27:39.419720] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:22:07.174 [2024-07-25 07:27:39.427306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:07.174 [2024-07-25 07:27:39.427335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:07.174 [2024-07-25 07:27:39.427346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:07.174 [2024-07-25 07:27:39.427399] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:07.174 [2024-07-25 07:27:39.427415] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:07.174 [2024-07-25 07:27:39.427427] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:07.174 [2024-07-25 07:27:39.427440] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:07.174 [2024-07-25 07:27:39.427523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:07.174 [2024-07-25 07:27:39.427537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:07.174 [2024-07-25 07:27:39.427547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:07.174 [2024-07-25 07:27:39.427559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:07.174 task offset: 24576 on job bdev=Nvme10n1 fails
00:22:07.174
00:22:07.174 Latency(us)
00:22:07.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:07.174 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.174 Job: Nvme1n1 ended in about 1.86 seconds with error
00:22:07.174 Verification LBA range: start 0x0 length 0x400
00:22:07.174 Nvme1n1 : 1.86 128.77 8.05 34.34 0.00 390108.90 7602.18 1067030.94
00:22:07.174 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.174 Job: Nvme2n1 ended in about 1.86 seconds with error
00:22:07.174 Verification LBA range: start 0x0 length 0x400
00:22:07.174 Nvme2n1 : 1.86 137.30 8.58 34.32 0.00 367593.88 8545.89 1067030.94
00:22:07.174 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.174 Job: Nvme3n1 ended in about 1.87 seconds with error
00:22:07.174 Verification LBA range: start 0x0 length 0x400
00:22:07.174 Nvme3n1 : 1.87 137.24 8.58 34.31 0.00 364806.80 14680.06 1147561.57
00:22:07.174 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.174 Job: Nvme4n1 ended in about 1.87 seconds with error
00:22:07.174 Verification LBA range: start 0x0 length 0x400
00:22:07.174 Nvme4n1 : 1.87 140.93 8.81 34.29 0.00 354249.98 4299.16 1140850.69
00:22:07.174 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.174 Job: Nvme5n1 ended in about 1.87 seconds with error
00:22:07.174 Verification LBA range: start 0x0 length 0x400
00:22:07.174 Nvme5n1 : 1.87 137.12 8.57 34.28 0.00 359000.97 29569.84 1134139.80
00:22:07.174 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.174 Job: Nvme6n1 ended in about 1.87 seconds with error
00:22:07.174 Verification LBA range: start 0x0 length 0x400
00:22:07.174 Nvme6n1 : 1.87 137.06 8.57 34.26 0.00 356118.69 31457.28 1120718.03
00:22:07.174 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.174 Job: Nvme7n1 ended in about 1.87 seconds with error
00:22:07.174 Verification LBA range: start 0x0 length 0x400
00:22:07.174 Nvme7n1 : 1.87 139.67 8.73 34.25 0.00 347972.65 4168.09 1114007.14
00:22:07.174 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.174 Job: Nvme8n1 ended in about 1.87 seconds with error
00:22:07.174 Verification LBA range: start 0x0 length 0x400
00:22:07.174 Nvme8n1 : 1.87 136.94 8.56 34.24 0.00 350489.48 48653.93 1107296.26
00:22:07.174 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.174 Job: Nvme9n1 ended in about 1.87 seconds with error
00:22:07.174 Verification LBA range: start 0x0 length 0x400
00:22:07.174 Nvme9n1 : 1.87 136.88 8.56 34.22 0.00 347575.42 43411.05 1100585.37
00:22:07.174 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.174 Job: Nvme10n1 ended in about 1.79 seconds with error
00:22:07.174 Verification LBA range: start 0x0 length 0x400
00:22:07.174 Nvme10n1 : 1.79 107.49 6.72 35.83 0.00 409460.74 62914.56 1087163.60
00:22:07.174 ===================================================================================================================
00:22:07.174 Total : 1339.39 83.71 344.34 0.00 363644.06 4168.09 1147561.57
00:22:07.174 [2024-07-25 07:27:39.447692] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:07.174 [2024-07-25 07:27:39.458796] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:07.174 [2024-07-25 07:27:39.458823] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:07.174 [2024-07-25 07:27:39.458835] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80
00:22:07.174 [2024-07-25 07:27:39.458952] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:07.174 [2024-07-25 07:27:39.458967] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:07.174 [2024-07-25 07:27:39.458977] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900
00:22:07.174 [2024-07-25 07:27:39.459065] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:07.174 [2024-07-25 07:27:39.459081] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:07.174 [2024-07-25 07:27:39.459092] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340
00:22:07.174 [2024-07-25 07:27:39.460059] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:07.174 [2024-07-25 07:27:39.460077] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:07.174 [2024-07-25 07:27:39.460088] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080
00:22:07.174 [2024-07-25 07:27:39.460169] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:07.174 [2024-07-25 07:27:39.460184] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:07.174 [2024-07-25 07:27:39.460194] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0
00:22:07.174 [2024-07-25 07:27:39.460286] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:07.174 [2024-07-25 07:27:39.460301] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:07.174 [2024-07-25 07:27:39.460313] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500
00:22:07.174 [2024-07-25 07:27:39.460423] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:07.174 [2024-07-25 07:27:39.460438] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:07.174 [2024-07-25 07:27:39.460449] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2745121
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:22:07.434 rmmod nvme_rdma
00:22:07.434 rmmod nvme_fabrics
00:22:07.434 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 2745121 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:22:07.434
00:22:07.434 real 0m5.180s
00:22:07.434 user 0m17.402s
00:22:07.434 sys 0m1.367s
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:07.434 ************************************
00:22:07.434 END TEST nvmf_shutdown_tc3
************************************ 00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:07.434 00:22:07.434 real 0m26.617s 00:22:07.434 user 1m11.541s 00:22:07.434 sys 0m10.565s 00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:07.434 ************************************ 00:22:07.434 END TEST nvmf_shutdown 00:22:07.434 ************************************ 00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:22:07.434 00:22:07.434 real 9m13.498s 00:22:07.434 user 20m24.553s 00:22:07.434 sys 2m24.786s 00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:07.434 07:27:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:07.434 ************************************ 00:22:07.434 END TEST nvmf_target_extra 00:22:07.434 ************************************ 00:22:07.434 07:27:39 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:22:07.434 07:27:39 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:07.434 07:27:39 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:07.434 07:27:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:07.693 ************************************ 00:22:07.693 START TEST nvmf_host 00:22:07.693 ************************************ 00:22:07.693 07:27:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:22:07.693 * Looking for test storage... 
00:22:07.693 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.693 07:27:40 nvmf_rdma.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.694 ************************************ 00:22:07.694 START TEST nvmf_multicontroller 00:22:07.694 ************************************ 00:22:07.694 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:22:07.954 * Looking for test storage... 
00:22:07.954 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.954 07:27:40 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller 
-- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:22:07.954 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:22:07.954 00:22:07.954 real 0m0.143s 00:22:07.954 user 0m0.067s 00:22:07.954 sys 0m0.086s 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.954 ************************************ 00:22:07.954 END TEST nvmf_multicontroller 00:22:07.954 ************************************ 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.954 ************************************ 00:22:07.954 START TEST nvmf_aer 00:22:07.954 ************************************ 00:22:07.954 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:22:08.214 * Looking for test storage... 
00:22:08.214 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:08.214 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.214 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:08.214 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.214 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:08.215 07:27:40 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:16.338 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
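The e810, x722, and mlx arrays populated in the trace above classify candidate NICs purely by PCI vendor:device ID before the harness settles on the Mellanox ports. A minimal standalone sketch of the same idea, read straight from sysfs rather than common.sh's pci_bus_cache (illustrative only, and deliberately coarser on the Mellanox side than the explicit ID list traced above):

  intel=0x8086 mellanox=0x15b3
  for dev in /sys/bus/pci/devices/*; do
      # sysfs exposes the IDs as 0x-prefixed hex, e.g. 0x15b3 / 0x1015 for the ConnectX-4 Lx ports found below
      ven=$(<"$dev/vendor") did=$(<"$dev/device")
      case "$ven:$did" in
          "$intel:0x1592" | "$intel:0x159b") echo "${dev##*/}: e810" ;;
          "$intel:0x37d2")                   echo "${dev##*/}: x722" ;;
          "$mellanox:"*)                     echo "${dev##*/}: mlx" ;;
      esac
  done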
00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:16.339 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:16.339 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:16.339 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:16.339 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:16.339 07:27:48 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:16.339 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:16.339 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:16.339 altname enp217s0f0np0 00:22:16.339 altname ens818f0np0 00:22:16.339 inet 192.168.100.8/24 scope global mlx_0_0 00:22:16.339 valid_lft forever preferred_lft forever 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:16.339 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:16.339 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:16.339 altname enp217s0f1np1 00:22:16.339 altname ens818f1np1 00:22:16.339 inet 192.168.100.9/24 scope global mlx_0_1 00:22:16.339 valid_lft forever preferred_lft forever 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:16.339 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:16.340 07:27:48 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:16.340 192.168.100.9' 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:16.340 192.168.100.9' 
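The records above assemble RDMA_IP_LIST from the harness's get_ip_address helper (the interface=/ip/awk/cut steps at common.sh@112-113), and the @457/@458 records just below peel off the first and second target IPs with head and tail. Reconstructed from this trace as a standalone sketch (not copied from common.sh):

  get_ip_address() {
      local interface=$1
      # First IPv4 address on the interface, CIDR suffix stripped by cut.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST="$(get_ip_address mlx_0_0)
  $(get_ip_address mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 in this run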
00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:16.340 192.168.100.9' 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2749940 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2749940 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2749940 ']' 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.340 07:27:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:16.340 [2024-07-25 07:27:48.788781] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:22:16.340 [2024-07-25 07:27:48.788828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.340 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.599 [2024-07-25 07:27:48.870432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.599 [2024-07-25 07:27:48.944699] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:16.599 [2024-07-25 07:27:48.944735] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.599 [2024-07-25 07:27:48.944744] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.599 [2024-07-25 07:27:48.944753] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.599 [2024-07-25 07:27:48.944760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.599 [2024-07-25 07:27:48.944848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.599 [2024-07-25 07:27:48.944868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.599 [2024-07-25 07:27:48.944956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.599 [2024-07-25 07:27:48.944957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.167 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.167 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:22:17.167 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.167 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.167 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.167 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.167 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:17.167 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.167 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.167 [2024-07-25 07:27:49.678137] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb62dd0/0xb672c0) succeed. 00:22:17.167 [2024-07-25 07:27:49.687596] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb64410/0xba8950) succeed. 
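rpc_cmd in these entries is the harness wrapper around SPDK's scripts/rpc.py, so the target-side bring-up that this and the following entries trace one call at a time reads as a single RPC sequence. Collected below with every method name and argument exactly as traced (invoking rpc.py directly is an assumption about the wrapper; it defaults to the /var/tmp/spdk.sock socket the target is listening on):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport, 8 KiB in-capsule data
  $RPC bdev_malloc_create 64 512 --name Malloc0                          # 64 MiB ramdisk, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2   # any host, max 2 namespaces
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420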
00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.426 Malloc0 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.426 [2024-07-25 07:27:49.853543] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.426 [ 00:22:17.426 { 00:22:17.426 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:17.426 "subtype": "Discovery", 00:22:17.426 "listen_addresses": [], 00:22:17.426 "allow_any_host": true, 00:22:17.426 "hosts": [] 00:22:17.426 }, 00:22:17.426 { 00:22:17.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.426 "subtype": "NVMe", 00:22:17.426 "listen_addresses": [ 00:22:17.426 { 00:22:17.426 "trtype": "RDMA", 00:22:17.426 "adrfam": "IPv4", 00:22:17.426 "traddr": "192.168.100.8", 00:22:17.426 "trsvcid": "4420" 00:22:17.426 } 00:22:17.426 ], 00:22:17.426 "allow_any_host": true, 00:22:17.426 "hosts": [], 00:22:17.426 "serial_number": "SPDK00000000000001", 00:22:17.426 "model_number": "SPDK bdev Controller", 00:22:17.426 "max_namespaces": 2, 00:22:17.426 "min_cntlid": 1, 00:22:17.426 "max_cntlid": 65519, 00:22:17.426 "namespaces": [ 00:22:17.426 { 00:22:17.426 "nsid": 1, 00:22:17.426 "bdev_name": "Malloc0", 00:22:17.426 "name": "Malloc0", 00:22:17.426 "nguid": "77B2B7E116B047FDA5898EA683CE41D2", 00:22:17.426 "uuid": "77b2b7e1-16b0-47fd-a589-8ea683ce41d2" 00:22:17.426 } 00:22:17.426 ] 00:22:17.426 } 00:22:17.426 ] 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2750222 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:17.426 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:17.426 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.685 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:17.685 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:17.685 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:17.685 07:27:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.685 Malloc1 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.685 [ 00:22:17.685 { 00:22:17.685 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:17.685 "subtype": "Discovery", 00:22:17.685 "listen_addresses": [], 00:22:17.685 "allow_any_host": true, 00:22:17.685 "hosts": [] 00:22:17.685 }, 00:22:17.685 { 00:22:17.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.685 "subtype": "NVMe", 00:22:17.685 "listen_addresses": [ 00:22:17.685 { 00:22:17.685 "trtype": "RDMA", 00:22:17.685 "adrfam": "IPv4", 00:22:17.685 "traddr": "192.168.100.8", 00:22:17.685 "trsvcid": "4420" 00:22:17.685 } 00:22:17.685 ], 00:22:17.685 "allow_any_host": true, 00:22:17.685 "hosts": [], 00:22:17.685 "serial_number": "SPDK00000000000001", 00:22:17.685 "model_number": "SPDK bdev Controller", 00:22:17.685 "max_namespaces": 2, 00:22:17.685 "min_cntlid": 1, 00:22:17.685 "max_cntlid": 65519, 00:22:17.685 "namespaces": [ 00:22:17.685 { 00:22:17.685 "nsid": 1, 00:22:17.685 "bdev_name": "Malloc0", 00:22:17.685 "name": "Malloc0", 00:22:17.685 "nguid": "77B2B7E116B047FDA5898EA683CE41D2", 00:22:17.685 "uuid": "77b2b7e1-16b0-47fd-a589-8ea683ce41d2" 00:22:17.685 }, 00:22:17.685 { 00:22:17.685 "nsid": 2, 00:22:17.685 "bdev_name": "Malloc1", 00:22:17.685 "name": "Malloc1", 00:22:17.685 "nguid": "D23D1E2E9A064AD99E94112D00A9E9D6", 00:22:17.685 "uuid": "d23d1e2e-9a06-4ad9-9e94-112d00a9e9d6" 00:22:17.685 } 00:22:17.685 ] 00:22:17.685 } 00:22:17.685 ] 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.685 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2750222 00:22:17.686 Asynchronous Event Request test 00:22:17.686 Attaching to 192.168.100.8 00:22:17.686 Attached to 192.168.100.8 00:22:17.686 Registering asynchronous event callbacks... 00:22:17.686 Starting namespace attribute notice tests for all controllers... 00:22:17.686 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:17.686 aer_cb - Changed Namespace 00:22:17.686 Cleaning up... 
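The touch file is how the harness learns the aer tool has registered its callbacks: the test removes /tmp/aer_touch_file, launches aer with -t /tmp/aer_touch_file, and waitforfile (autotest_common.sh @1265-@1276 above) polls for it at 100 ms intervals before triggering the namespace change with Malloc1. A reduced sketch of that polling loop, reconstructed from the traced conditions (the 200-iteration cap is visible at @1267; the real helper's failure handling may differ):

  waitforfile() {
      local i=0
      while [ ! -e "$1" ]; do
          [ "$i" -lt 200 ] || return 1   # give up after ~20 s
          i=$((i + 1))
          sleep 0.1
      done
      return 0
  }

  waitforfile /tmp/aer_touch_file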
00:22:17.686 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:17.686 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.686 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:17.945 rmmod nvme_rdma 00:22:17.945 rmmod nvme_fabrics 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2749940 ']' 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2749940 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2749940 ']' 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2749940 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2749940 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2749940' 00:22:17.945 killing process 
with pid 2749940 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2749940 00:22:17.945 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2749940 00:22:18.204 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:18.204 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:18.204 00:22:18.204 real 0m10.233s 00:22:18.204 user 0m8.858s 00:22:18.204 sys 0m6.864s 00:22:18.204 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:18.204 07:27:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.204 ************************************ 00:22:18.204 END TEST nvmf_aer 00:22:18.204 ************************************ 00:22:18.204 07:27:50 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:22:18.204 07:27:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:18.204 07:27:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:18.204 07:27:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.204 ************************************ 00:22:18.204 START TEST nvmf_async_init 00:22:18.204 ************************************ 00:22:18.204 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:22:18.463 * Looking for test storage... 00:22:18.463 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
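Sourcing nvmf/common.sh for the next test derives the host identity once: nvme gen-hostnqn emits a UUID-based NQN (@17), and the host ID is that NQN's UUID suffix (@18). A sketch of the derivation with the values from this run (the parameter expansion is illustrative; common.sh may extract the suffix differently):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e here
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last colon, leaving the bare UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")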
00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:18.463 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=972bb68aaf9344f0b073c9f6acf06505 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:18.464 07:27:50 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:18.464 07:27:50 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 
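gather_supported_nvmf_pci_devs builds its candidate list by matching a cached PCI map against known Intel (0x8086) and Mellanox (0x15b3) device IDs; with SPDK_TEST_NVMF_NICS=mlx5 only the mlx array is kept, and the next entries match both ports of one NIC at 0x15b3:0x1015. An equivalent ad-hoc query with lspci (illustrative, not what the harness runs):

  # Domain-qualified, numeric listing of Mellanox devices with device ID 0x1015
  lspci -Dnn -d 15b3:1015
  # -> 0000:d9:00.0 ... [15b3:1015]
  #    0000:d9:00.1 ... [15b3:1015]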
00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:26.621 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:26.621 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:26.621 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:26.621 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:26.621 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 
== \m\l\x\_\0\_\0 ]] 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:26.622 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:26.622 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:26.622 altname enp217s0f0np0 00:22:26.622 altname ens818f0np0 00:22:26.622 inet 192.168.100.8/24 scope global mlx_0_0 00:22:26.622 valid_lft forever preferred_lft forever 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:26.622 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:26.622 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:26.622 altname enp217s0f1np1 00:22:26.622 altname ens818f1np1 00:22:26.622 inet 192.168.100.9/24 scope global mlx_0_1 00:22:26.622 valid_lft forever preferred_lft 
forever 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 
00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:26.622 192.168.100.9' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:26.622 192.168.100.9' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:26.622 192.168.100.9' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2754188 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2754188 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2754188 ']' 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.622 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
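waitforlisten blocks until the freshly forked nvmf_tgt (pid 2754188) accepts RPCs on /var/tmp/spdk.sock; local max_retries=100 is visible at @836. A reduced sketch of that wait, assuming rpc.py's spdk_get_version as the liveness probe and reusing the RPC variable from the bring-up sketch above (the real helper's probe and failure handling may differ):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
      while ! "$RPC" -s "$rpc_addr" spdk_get_version >/dev/null 2>&1; do
          kill -0 "$pid" 2>/dev/null || return 1   # target died before it could listen
          i=$((i + 1))
          [ "$i" -le 100 ] || return 1             # max_retries from the trace
          sleep 0.1                                # assumed poll interval
      done
  }

  waitforlisten 2754188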
00:22:26.623 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.623 07:27:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:26.623 [2024-07-25 07:27:58.743151] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:22:26.623 [2024-07-25 07:27:58.743205] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.623 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.623 [2024-07-25 07:27:58.827387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.623 [2024-07-25 07:27:58.899736] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.623 [2024-07-25 07:27:58.899772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.623 [2024-07-25 07:27:58.899782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.623 [2024-07-25 07:27:58.899791] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.623 [2024-07-25 07:27:58.899797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.623 [2024-07-25 07:27:58.899818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.193 [2024-07-25 07:27:59.615631] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e18b90/0x1e1d080) succeed. 00:22:27.193 [2024-07-25 07:27:59.624270] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e1a090/0x1e5e710) succeed. 
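The async_init target mirrors the aer setup but serves a null bdev (reads complete without backing memory), and the entries that follow run the sequence one rpc_cmd at a time. Collected, with methods and arguments verbatim from the trace; the sizes are MiB and bytes, so 1024 MiB at 512 B per block gives the 2097152 num_blocks reported below:

  $RPC bdev_null_create null0 1024 512
  $RPC bdev_wait_for_examine
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 972bb68aaf9344f0b073c9f6acf06505
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  # Loopback-attach the host-side NVMe bdev driver to that same subsystem:
  $RPC bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0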
00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.193 null0 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 972bb68aaf9344f0b073c9f6acf06505 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.193 [2024-07-25 07:27:59.710603] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.193 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.452 nvme0n1 00:22:27.452 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.452 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:27.452 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.452 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.452 [ 
00:22:27.452 { 00:22:27.452 "name": "nvme0n1", 00:22:27.452 "aliases": [ 00:22:27.452 "972bb68a-af93-44f0-b073-c9f6acf06505" 00:22:27.452 ], 00:22:27.452 "product_name": "NVMe disk", 00:22:27.452 "block_size": 512, 00:22:27.452 "num_blocks": 2097152, 00:22:27.453 "uuid": "972bb68a-af93-44f0-b073-c9f6acf06505", 00:22:27.453 "assigned_rate_limits": { 00:22:27.453 "rw_ios_per_sec": 0, 00:22:27.453 "rw_mbytes_per_sec": 0, 00:22:27.453 "r_mbytes_per_sec": 0, 00:22:27.453 "w_mbytes_per_sec": 0 00:22:27.453 }, 00:22:27.453 "claimed": false, 00:22:27.453 "zoned": false, 00:22:27.453 "supported_io_types": { 00:22:27.453 "read": true, 00:22:27.453 "write": true, 00:22:27.453 "unmap": false, 00:22:27.453 "flush": true, 00:22:27.453 "reset": true, 00:22:27.453 "nvme_admin": true, 00:22:27.453 "nvme_io": true, 00:22:27.453 "nvme_io_md": false, 00:22:27.453 "write_zeroes": true, 00:22:27.453 "zcopy": false, 00:22:27.453 "get_zone_info": false, 00:22:27.453 "zone_management": false, 00:22:27.453 "zone_append": false, 00:22:27.453 "compare": true, 00:22:27.453 "compare_and_write": true, 00:22:27.453 "abort": true, 00:22:27.453 "seek_hole": false, 00:22:27.453 "seek_data": false, 00:22:27.453 "copy": true, 00:22:27.453 "nvme_iov_md": false 00:22:27.453 }, 00:22:27.453 "memory_domains": [ 00:22:27.453 { 00:22:27.453 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:22:27.453 "dma_device_type": 0 00:22:27.453 } 00:22:27.453 ], 00:22:27.453 "driver_specific": { 00:22:27.453 "nvme": [ 00:22:27.453 { 00:22:27.453 "trid": { 00:22:27.453 "trtype": "RDMA", 00:22:27.453 "adrfam": "IPv4", 00:22:27.453 "traddr": "192.168.100.8", 00:22:27.453 "trsvcid": "4420", 00:22:27.453 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:27.453 }, 00:22:27.453 "ctrlr_data": { 00:22:27.453 "cntlid": 1, 00:22:27.453 "vendor_id": "0x8086", 00:22:27.453 "model_number": "SPDK bdev Controller", 00:22:27.453 "serial_number": "00000000000000000000", 00:22:27.453 "firmware_revision": "24.09", 00:22:27.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:27.453 "oacs": { 00:22:27.453 "security": 0, 00:22:27.453 "format": 0, 00:22:27.453 "firmware": 0, 00:22:27.453 "ns_manage": 0 00:22:27.453 }, 00:22:27.453 "multi_ctrlr": true, 00:22:27.453 "ana_reporting": false 00:22:27.453 }, 00:22:27.453 "vs": { 00:22:27.453 "nvme_version": "1.3" 00:22:27.453 }, 00:22:27.453 "ns_data": { 00:22:27.453 "id": 1, 00:22:27.453 "can_share": true 00:22:27.453 } 00:22:27.453 } 00:22:27.453 ], 00:22:27.453 "mp_policy": "active_passive" 00:22:27.453 } 00:22:27.453 } 00:22:27.453 ] 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.453 [2024-07-25 07:27:59.834614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:27.453 [2024-07-25 07:27:59.850979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:27.453 [2024-07-25 07:27:59.875694] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
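That reset is the assertion of this stage: bdev_nvme_reset_controller drops the qpairs (the CQ transport error -6 above is the expected disconnect), reconnects, and the bdev must survive intact. The trace proves it with a before/after bdev dump, equivalent to:

  $RPC bdev_nvme_reset_controller nvme0
  $RPC bdev_get_bdevs -b nvme0n1   # same uuid as before the reset, but ctrlr_data.cntlid
                                   # is now 2: the reconnect created a fresh controller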
00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.453 [ 00:22:27.453 { 00:22:27.453 "name": "nvme0n1", 00:22:27.453 "aliases": [ 00:22:27.453 "972bb68a-af93-44f0-b073-c9f6acf06505" 00:22:27.453 ], 00:22:27.453 "product_name": "NVMe disk", 00:22:27.453 "block_size": 512, 00:22:27.453 "num_blocks": 2097152, 00:22:27.453 "uuid": "972bb68a-af93-44f0-b073-c9f6acf06505", 00:22:27.453 "assigned_rate_limits": { 00:22:27.453 "rw_ios_per_sec": 0, 00:22:27.453 "rw_mbytes_per_sec": 0, 00:22:27.453 "r_mbytes_per_sec": 0, 00:22:27.453 "w_mbytes_per_sec": 0 00:22:27.453 }, 00:22:27.453 "claimed": false, 00:22:27.453 "zoned": false, 00:22:27.453 "supported_io_types": { 00:22:27.453 "read": true, 00:22:27.453 "write": true, 00:22:27.453 "unmap": false, 00:22:27.453 "flush": true, 00:22:27.453 "reset": true, 00:22:27.453 "nvme_admin": true, 00:22:27.453 "nvme_io": true, 00:22:27.453 "nvme_io_md": false, 00:22:27.453 "write_zeroes": true, 00:22:27.453 "zcopy": false, 00:22:27.453 "get_zone_info": false, 00:22:27.453 "zone_management": false, 00:22:27.453 "zone_append": false, 00:22:27.453 "compare": true, 00:22:27.453 "compare_and_write": true, 00:22:27.453 "abort": true, 00:22:27.453 "seek_hole": false, 00:22:27.453 "seek_data": false, 00:22:27.453 "copy": true, 00:22:27.453 "nvme_iov_md": false 00:22:27.453 }, 00:22:27.453 "memory_domains": [ 00:22:27.453 { 00:22:27.453 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:22:27.453 "dma_device_type": 0 00:22:27.453 } 00:22:27.453 ], 00:22:27.453 "driver_specific": { 00:22:27.453 "nvme": [ 00:22:27.453 { 00:22:27.453 "trid": { 00:22:27.453 "trtype": "RDMA", 00:22:27.453 "adrfam": "IPv4", 00:22:27.453 "traddr": "192.168.100.8", 00:22:27.453 "trsvcid": "4420", 00:22:27.453 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:27.453 }, 00:22:27.453 "ctrlr_data": { 00:22:27.453 "cntlid": 2, 00:22:27.453 "vendor_id": "0x8086", 00:22:27.453 "model_number": "SPDK bdev Controller", 00:22:27.453 "serial_number": "00000000000000000000", 00:22:27.453 "firmware_revision": "24.09", 00:22:27.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:27.453 "oacs": { 00:22:27.453 "security": 0, 00:22:27.453 "format": 0, 00:22:27.453 "firmware": 0, 00:22:27.453 "ns_manage": 0 00:22:27.453 }, 00:22:27.453 "multi_ctrlr": true, 00:22:27.453 "ana_reporting": false 00:22:27.453 }, 00:22:27.453 "vs": { 00:22:27.453 "nvme_version": "1.3" 00:22:27.453 }, 00:22:27.453 "ns_data": { 00:22:27.453 "id": 1, 00:22:27.453 "can_share": true 00:22:27.453 } 00:22:27.453 } 00:22:27.453 ], 00:22:27.453 "mp_policy": "active_passive" 00:22:27.453 } 00:22:27.453 } 00:22:27.453 ] 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.lDBB23Sge1 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.lDBB23Sge1 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.453 [2024-07-25 07:27:59.959004] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lDBB23Sge1 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lDBB23Sge1 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.453 07:27:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.453 [2024-07-25 07:27:59.975038] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.712 nvme0n1 00:22:27.712 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.712 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:27.712 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.712 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.712 [ 00:22:27.712 { 00:22:27.712 "name": "nvme0n1", 00:22:27.712 "aliases": [ 00:22:27.712 "972bb68a-af93-44f0-b073-c9f6acf06505" 00:22:27.712 ], 00:22:27.712 "product_name": "NVMe disk", 00:22:27.712 "block_size": 512, 00:22:27.712 "num_blocks": 2097152, 00:22:27.712 "uuid": 
"972bb68a-af93-44f0-b073-c9f6acf06505", 00:22:27.712 "assigned_rate_limits": { 00:22:27.712 "rw_ios_per_sec": 0, 00:22:27.712 "rw_mbytes_per_sec": 0, 00:22:27.712 "r_mbytes_per_sec": 0, 00:22:27.712 "w_mbytes_per_sec": 0 00:22:27.713 }, 00:22:27.713 "claimed": false, 00:22:27.713 "zoned": false, 00:22:27.713 "supported_io_types": { 00:22:27.713 "read": true, 00:22:27.713 "write": true, 00:22:27.713 "unmap": false, 00:22:27.713 "flush": true, 00:22:27.713 "reset": true, 00:22:27.713 "nvme_admin": true, 00:22:27.713 "nvme_io": true, 00:22:27.713 "nvme_io_md": false, 00:22:27.713 "write_zeroes": true, 00:22:27.713 "zcopy": false, 00:22:27.713 "get_zone_info": false, 00:22:27.713 "zone_management": false, 00:22:27.713 "zone_append": false, 00:22:27.713 "compare": true, 00:22:27.713 "compare_and_write": true, 00:22:27.713 "abort": true, 00:22:27.713 "seek_hole": false, 00:22:27.713 "seek_data": false, 00:22:27.713 "copy": true, 00:22:27.713 "nvme_iov_md": false 00:22:27.713 }, 00:22:27.713 "memory_domains": [ 00:22:27.713 { 00:22:27.713 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:22:27.713 "dma_device_type": 0 00:22:27.713 } 00:22:27.713 ], 00:22:27.713 "driver_specific": { 00:22:27.713 "nvme": [ 00:22:27.713 { 00:22:27.713 "trid": { 00:22:27.713 "trtype": "RDMA", 00:22:27.713 "adrfam": "IPv4", 00:22:27.713 "traddr": "192.168.100.8", 00:22:27.713 "trsvcid": "4421", 00:22:27.713 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:27.713 }, 00:22:27.713 "ctrlr_data": { 00:22:27.713 "cntlid": 3, 00:22:27.713 "vendor_id": "0x8086", 00:22:27.713 "model_number": "SPDK bdev Controller", 00:22:27.713 "serial_number": "00000000000000000000", 00:22:27.713 "firmware_revision": "24.09", 00:22:27.713 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:27.713 "oacs": { 00:22:27.713 "security": 0, 00:22:27.713 "format": 0, 00:22:27.713 "firmware": 0, 00:22:27.713 "ns_manage": 0 00:22:27.713 }, 00:22:27.713 "multi_ctrlr": true, 00:22:27.713 "ana_reporting": false 00:22:27.713 }, 00:22:27.713 "vs": { 00:22:27.713 "nvme_version": "1.3" 00:22:27.713 }, 00:22:27.713 "ns_data": { 00:22:27.713 "id": 1, 00:22:27.713 "can_share": true 00:22:27.713 } 00:22:27.713 } 00:22:27.713 ], 00:22:27.713 "mp_policy": "active_passive" 00:22:27.713 } 00:22:27.713 } 00:22:27.713 ] 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.lDBB23Sge1 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 
00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:27.713 rmmod nvme_rdma 00:22:27.713 rmmod nvme_fabrics 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2754188 ']' 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2754188 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2754188 ']' 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2754188 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2754188 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2754188' 00:22:27.713 killing process with pid 2754188 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2754188 00:22:27.713 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2754188 00:22:27.972 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.972 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:27.972 00:22:27.972 real 0m9.716s 00:22:27.972 user 0m3.988s 00:22:27.972 sys 0m6.399s 00:22:27.972 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:27.972 07:28:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.972 ************************************ 00:22:27.972 END TEST nvmf_async_init 00:22:27.972 ************************************ 00:22:27.972 07:28:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:22:27.972 07:28:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:27.972 07:28:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:27.972 07:28:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.972 ************************************ 00:22:27.972 START TEST dma 00:22:27.972 ************************************ 00:22:27.972 07:28:00 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:22:28.232 * Looking for test storage... 
00:22:28.232 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.232 07:28:00 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@291 -- # pci_devs=() 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@295 -- # net_devs=() 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # e810=() 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # local -ga e810 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@297 -- # x722=() 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@297 -- # local -ga x722 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # mlx=() 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
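The device discovery starting here builds per-family PCI ID allow-lists (Intel E810/X722 plus the Mellanox ConnectX IDs enumerated above) and then walks the PCI bus for matches. The same check can be made by hand; e.g. the two ports this run finds are Mellanox device 0x1015 under vendor 0x15b3:

  $ lspci -d 15b3:1015
  d9:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
  d9:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]

(The output shape here is illustrative; the log below records only the matching addresses 0000:d9:00.0/0000:d9:00.1 and the mlx_0_0/mlx_0_1 netdevs behind them.)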
00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:36.354 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:36.354 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:36.354 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@383 -- 
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:36.354 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # uname 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.354 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:36.355 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:36.355 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:36.355 altname enp217s0f0np0 00:22:36.355 altname ens818f0np0 00:22:36.355 inet 192.168.100.8/24 scope global mlx_0_0 00:22:36.355 valid_lft forever preferred_lft forever 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:36.355 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:36.355 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:36.355 altname enp217s0f1np1 00:22:36.355 altname ens818f1np1 00:22:36.355 inet 192.168.100.9/24 scope global mlx_0_1 00:22:36.355 valid_lft forever preferred_lft forever 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # return 0 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:36.355 07:28:08 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:36.355 192.168.100.9' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:36.355 192.168.100.9' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # head -n 1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:36.355 192.168.100.9' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # tail -n +2 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # head -n 1 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # nvmfpid=2758541 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # waitforlisten 2758541 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@831 -- # '[' -z 2758541 ']' 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:36.355 07:28:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:36.355 [2024-07-25 07:28:08.771354] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:22:36.355 [2024-07-25 07:28:08.771406] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.355 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.355 [2024-07-25 07:28:08.857642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:36.615 [2024-07-25 07:28:08.927352] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.615 [2024-07-25 07:28:08.927395] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.615 [2024-07-25 07:28:08.927404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.615 [2024-07-25 07:28:08.927412] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.615 [2024-07-25 07:28:08.927419] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
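The allocate_nic_ips stanza above is plain ip(8) parsing: for each RDMA-capable netdev the first IPv4 address is extracted, and the collected RDMA_IP_LIST is then split with head -n 1 / tail -n +2 into the first and second target IPs. Equivalent one-liners for this run's interfaces:

  $ ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.8  -> NVMF_FIRST_TARGET_IP
  $ ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.9  -> NVMF_SECOND_TARGET_IP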
00:22:36.615 [2024-07-25 07:28:08.927482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.615 [2024-07-25 07:28:08.927483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.183 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:37.184 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # return 0 00:22:37.184 07:28:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:37.184 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:37.184 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:37.184 07:28:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.184 07:28:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:37.184 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.184 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:37.184 [2024-07-25 07:28:09.643360] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12e5840/0x12e9d30) succeed. 00:22:37.184 [2024-07-25 07:28:09.652387] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12e6d40/0x132b3c0) succeed. 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:37.443 Malloc0 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:37.443 [2024-07-25 07:28:09.795834] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma 
-q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@532 -- # config=() 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@532 -- # local subsystem config 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.443 { 00:22:37.443 "params": { 00:22:37.443 "name": "Nvme$subsystem", 00:22:37.443 "trtype": "$TEST_TRANSPORT", 00:22:37.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.443 "adrfam": "ipv4", 00:22:37.443 "trsvcid": "$NVMF_PORT", 00:22:37.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.443 "hdgst": ${hdgst:-false}, 00:22:37.443 "ddgst": ${ddgst:-false} 00:22:37.443 }, 00:22:37.443 "method": "bdev_nvme_attach_controller" 00:22:37.443 } 00:22:37.443 EOF 00:22:37.443 )") 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@554 -- # cat 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@556 -- # jq . 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@557 -- # IFS=, 00:22:37.443 07:28:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:37.443 "params": { 00:22:37.443 "name": "Nvme0", 00:22:37.443 "trtype": "rdma", 00:22:37.443 "traddr": "192.168.100.8", 00:22:37.443 "adrfam": "ipv4", 00:22:37.443 "trsvcid": "4420", 00:22:37.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:37.443 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:37.443 "hdgst": false, 00:22:37.443 "ddgst": false 00:22:37.443 }, 00:22:37.443 "method": "bdev_nvme_attach_controller" 00:22:37.443 }' 00:22:37.443 [2024-07-25 07:28:09.846344] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:22:37.443 [2024-07-25 07:28:09.846392] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2758648 ]
00:22:37.443 EAL: No free 2048 kB hugepages reported on node 1
00:22:37.443 [2024-07-25 07:28:09.927724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:22:37.703 [2024-07-25 07:28:09.999165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:22:37.703 [2024-07-25 07:28:09.999169] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:22:42.976 bdev Nvme0n1 reports 1 memory domains
00:22:42.976 bdev Nvme0n1 supports RDMA memory domain
00:22:42.976 Initialization complete, running randrw IO for 5 sec on 2 cores
00:22:42.976 ==========================================================================
00:22:42.976                                       Latency [us]
00:22:42.976             IOPS       MiB/s    Average    min        max
00:22:42.976   Core 2:  22051.85   86.14     724.89     250.94     8447.80
00:22:42.976   Core 3:  22219.63   86.80     719.38     237.71     8627.43
00:22:42.976 ==========================================================================
00:22:42.976   Total :  44271.48   172.94    722.13     237.71     8627.43
00:22:42.976
00:22:42.976 Total operations: 221383, translate 221383 pull_push 0 memzero 0
00:22:42.976 07:28:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push
00:22:42.977 07:28:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json
00:22:42.977 07:28:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq .
00:22:42.977 [2024-07-25 07:28:15.444696] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:22:42.977 [2024-07-25 07:28:15.444755] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759708 ]
00:22:43.235 EAL: No free 2048 kB hugepages reported on node 1
00:22:43.235 [2024-07-25 07:28:15.523987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:22:43.236 [2024-07-25 07:28:15.590742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:22:43.236 [2024-07-25 07:28:15.590745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:22:48.507 bdev Malloc0 reports 2 memory domains
00:22:48.507 bdev Malloc0 doesn't support RDMA memory domain
00:22:48.507 Initialization complete, running randrw IO for 5 sec on 2 cores
00:22:48.507 ==========================================================================
00:22:48.507                                       Latency [us]
00:22:48.507             IOPS       MiB/s    Average    min        max
00:22:48.507   Core 2:  14392.01   56.22     1110.95    381.37     1591.81
00:22:48.507   Core 3:  14554.35   56.85     1098.54    432.71     2902.16
00:22:48.507 ==========================================================================
00:22:48.507   Total :  28946.36   113.07    1104.71    381.37     2902.16
00:22:48.507
00:22:48.507 Total operations: 144784, translate 0 pull_push 579136 memzero 0
00:22:48.507 07:28:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero
00:22:48.507 07:28:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0
00:22:48.507 07:28:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0
00:22:48.507 07:28:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq .
00:22:48.507 Ignoring -M option
00:22:48.507 [2024-07-25 07:28:20.942829] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:22:48.507 [2024-07-25 07:28:20.942889] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760515 ]
00:22:48.507 EAL: No free 2048 kB hugepages reported on node 1
00:22:48.766 [2024-07-25 07:28:21.022532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:22:48.766 [2024-07-25 07:28:21.089670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:22:48.766 [2024-07-25 07:28:21.089674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:22:54.085 bdev ccf1914a-9a21-4e95-b015-1ce6c6555323 reports 1 memory domains
00:22:54.085 bdev ccf1914a-9a21-4e95-b015-1ce6c6555323 supports RDMA memory domain
00:22:54.085 Initialization complete, running randread IO for 5 sec on 2 cores
00:22:54.085 ==========================================================================
00:22:54.085                                       Latency [us]
00:22:54.085             IOPS       MiB/s    Average    min        max
00:22:54.085   Core 2:  76195.08   297.64    209.23     96.24      1493.78
00:22:54.085   Core 3:  78127.67   305.19    204.06     93.84      1401.48
00:22:54.085 ==========================================================================
00:22:54.085   Total :  154322.75  602.82    206.61     93.84      1493.78
00:22:54.085
00:22:54.085 Total operations: 771696, translate 0 pull_push 0 memzero 771696
00:22:54.085 07:28:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
00:22:54.085 EAL: No free 2048 kB hugepages reported on node 1
00:22:54.343 [2024-07-25 07:28:26.640660] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:56.878 Initializing NVMe Controllers
00:22:56.878 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:22:56.878 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:22:56.878 Initialization complete. Launching workers.
00:22:56.878 ========================================================
00:22:56.878                                                                              Latency(us)
00:22:56.878 Device Information                                                   :  IOPS      MiB/s    Average    min        max
00:22:56.878 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0:  2016.00   7.88     7972.17    4987.80    11969.90
00:22:56.878 ========================================================
00:22:56.878 Total                                                                :  2016.00   7.88     7972.17    4987.80    11969.90
00:22:56.878
00:22:56.878 07:28:28 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate
00:22:56.878 07:28:28 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0
00:22:56.878 07:28:28 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0
00:22:56.878 07:28:28 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq .
00:22:56.878 [2024-07-25 07:28:28.979793] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:22:56.878 [2024-07-25 07:28:28.979849] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761849 ]
00:22:56.878 EAL: No free 2048 kB hugepages reported on node 1
00:22:56.878 [2024-07-25 07:28:29.061406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:22:56.878 [2024-07-25 07:28:29.132144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:22:56.878 [2024-07-25 07:28:29.132147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:23:02.151 bdev 9dc8d067-9671-45f6-9c44-72fa8d3bc95a reports 1 memory domains
00:23:02.151 bdev 9dc8d067-9671-45f6-9c44-72fa8d3bc95a supports RDMA memory domain
00:23:02.151 Initialization complete, running randrw IO for 5 sec on 2 cores
00:23:02.151 ==========================================================================
00:23:02.151                                       Latency [us]
00:23:02.151             IOPS       MiB/s    Average    min        max
00:23:02.151   Core 2:  19652.66   76.77     813.43     28.55      9935.64
00:23:02.151   Core 3:  19495.28   76.15     820.03     16.81      9584.22
00:23:02.151 ==========================================================================
00:23:02.151   Total :  39147.94   152.92    816.71     16.81      9935.64
00:23:02.151
00:23:02.151 Total operations: 195769, translate 195664 pull_push 0 memzero 105
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # sync
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@120 -- # set +e
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:23:02.151 rmmod nvme_rdma
00:23:02.151 rmmod nvme_fabrics
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set -e
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # return 0
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@489 -- # '[' -n 2758541 ']'
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@490 -- # killprocess 2758541
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@950 -- # '[' -z 2758541 ']'
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # kill -0 2758541
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # uname
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:02.151 07:28:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2758541
00:23:02.410 07:28:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:23:02.410 07:28:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:23:02.410 07:28:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@968 -- # echo
'killing process with pid 2758541' 00:23:02.410 killing process with pid 2758541 00:23:02.410 07:28:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@969 -- # kill 2758541 00:23:02.410 07:28:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@974 -- # wait 2758541 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:02.669 00:23:02.669 real 0m34.505s 00:23:02.669 user 1m37.259s 00:23:02.669 sys 0m7.427s 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:02.669 ************************************ 00:23:02.669 END TEST dma 00:23:02.669 ************************************ 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.669 ************************************ 00:23:02.669 START TEST nvmf_identify 00:23:02.669 ************************************ 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:23:02.669 * Looking for test storage... 00:23:02.669 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:02.669 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.670 07:28:35 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.670 07:28:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:10.793 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.793 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:10.794 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 
-- # [[ mlx5_core == unknown ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:10.794 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:10.794 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:10.794 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:10.794 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:10.794 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:10.794 altname enp217s0f0np0 00:23:10.794 altname ens818f0np0 00:23:10.794 inet 192.168.100.8/24 scope global mlx_0_0 00:23:10.794 valid_lft forever preferred_lft forever 00:23:10.794 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:10.795 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:10.795 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:10.795 altname enp217s0f1np1 00:23:10.795 altname ens818f1np1 00:23:10.795 inet 192.168.100.9/24 scope global mlx_0_1 00:23:10.795 valid_lft forever preferred_lft forever 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 
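The per-interface address discovery traced above reduces to one pipeline, repeated for each RDMA netdev; a minimal recap using exactly the commands from the trace (on this rig mlx_0_0 carries 192.168.100.8 and mlx_0_1 carries 192.168.100.9):

  # Same derivation as get_ip_address() in nvmf/common.sh: field 4 of
  # `ip -o -4 addr show` is the CIDR address; cut strips the prefix length.
  for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # -> 192.168.100.8
  # -> 192.168.100.9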
00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:10.795 192.168.100.9' 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:10.795 192.168.100.9' 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:10.795 192.168.100.9' 
00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2766844 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2766844 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2766844 ']' 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.795 07:28:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.053 [2024-07-25 07:28:43.340498] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:23:11.053 [2024-07-25 07:28:43.340550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.053 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.053 [2024-07-25 07:28:43.423479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:11.053 [2024-07-25 07:28:43.502078] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.053 [2024-07-25 07:28:43.502114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:11.053 [2024-07-25 07:28:43.502125] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.053 [2024-07-25 07:28:43.502134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.053 [2024-07-25 07:28:43.502142] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:11.053 [2024-07-25 07:28:43.502182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.053 [2024-07-25 07:28:43.502279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.053 [2024-07-25 07:28:43.502368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:11.053 [2024-07-25 07:28:43.502370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.683 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.683 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:11.683 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:11.683 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.683 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 [2024-07-25 07:28:44.175752] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13b7dd0/0x13bc2c0) succeed. 00:23:11.683 [2024-07-25 07:28:44.185214] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13b9410/0x13fd950) succeed. 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.942 Malloc0 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.942 [2024-07-25 07:28:44.395577] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.942 [ 00:23:11.942 { 00:23:11.942 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:11.942 "subtype": "Discovery", 00:23:11.942 "listen_addresses": [ 00:23:11.942 { 00:23:11.942 "trtype": "RDMA", 00:23:11.942 "adrfam": "IPv4", 00:23:11.942 "traddr": "192.168.100.8", 00:23:11.942 "trsvcid": "4420" 00:23:11.942 } 00:23:11.942 ], 00:23:11.942 "allow_any_host": true, 00:23:11.942 "hosts": [] 00:23:11.942 }, 00:23:11.942 { 00:23:11.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.942 "subtype": "NVMe", 00:23:11.942 "listen_addresses": [ 00:23:11.942 { 00:23:11.942 "trtype": "RDMA", 00:23:11.942 "adrfam": "IPv4", 00:23:11.942 "traddr": "192.168.100.8", 00:23:11.942 "trsvcid": "4420" 00:23:11.942 } 00:23:11.942 ], 00:23:11.942 "allow_any_host": true, 00:23:11.942 "hosts": [], 00:23:11.942 "serial_number": "SPDK00000000000001", 00:23:11.942 "model_number": "SPDK bdev Controller", 00:23:11.942 "max_namespaces": 32, 00:23:11.942 "min_cntlid": 1, 00:23:11.942 "max_cntlid": 65519, 00:23:11.942 "namespaces": [ 00:23:11.942 { 00:23:11.942 "nsid": 1, 00:23:11.942 "bdev_name": "Malloc0", 00:23:11.942 "name": "Malloc0", 00:23:11.942 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:11.942 "eui64": "ABCDEF0123456789", 00:23:11.942 "uuid": "853bd8aa-de79-42a5-99d3-b5b9fb02411f" 00:23:11.942 } 00:23:11.942 ] 00:23:11.942 } 00:23:11.942 ] 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.942 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:11.942 [2024-07-25 07:28:44.454424] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
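Before spdk_nvme_identify was launched above, the target was assembled through a short rpc_cmd sequence: create the RDMA transport, a 64 MiB malloc bdev with 512-byte blocks, the cnode1 subsystem with one namespace, and listeners for both cnode1 and the discovery subsystem. As a hedged recap, the equivalent direct scripts/rpc.py calls (rpc_cmd wraps this script against the default /var/tmp/spdk.sock socket), with the same arguments as traced, would be:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420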
00:23:11.942 [2024-07-25 07:28:44.454466] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2767086 ] 00:23:11.942 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.210 [2024-07-25 07:28:44.501812] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:12.210 [2024-07-25 07:28:44.501902] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:23:12.210 [2024-07-25 07:28:44.501916] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:23:12.210 [2024-07-25 07:28:44.501921] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:23:12.210 [2024-07-25 07:28:44.501950] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:12.210 [2024-07-25 07:28:44.513118] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:23:12.210 [2024-07-25 07:28:44.527181] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:12.210 [2024-07-25 07:28:44.527190] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:23:12.210 [2024-07-25 07:28:44.527198] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527205] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527211] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527218] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527224] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527230] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527237] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527243] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527249] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527256] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527262] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527268] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527275] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527281] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527287] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527293] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527300] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527306] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527315] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527322] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527328] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527334] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527341] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527347] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527353] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527359] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527366] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527372] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527378] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527385] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527391] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527397] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:23:12.210 [2024-07-25 07:28:44.527402] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:12.210 [2024-07-25 07:28:44.527407] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:23:12.210 [2024-07-25 07:28:44.527420] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.527433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182000 00:23:12.210 [2024-07-25 07:28:44.532630] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.210 [2024-07-25 07:28:44.532640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:12.210 [2024-07-25 07:28:44.532648] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.532658] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:12.210 [2024-07-25 07:28:44.532665] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:12.210 [2024-07-25 07:28:44.532671] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:12.210 [2024-07-25 07:28:44.532686] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.532694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.210 [2024-07-25 07:28:44.532715] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.210 [2024-07-25 07:28:44.532721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:23:12.210 [2024-07-25 07:28:44.532728] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:12.210 [2024-07-25 07:28:44.532734] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.532740] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:12.210 [2024-07-25 07:28:44.532752] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.532760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.210 [2024-07-25 07:28:44.532780] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.210 [2024-07-25 07:28:44.532785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:23:12.210 [2024-07-25 07:28:44.532792] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:12.210 [2024-07-25 07:28:44.532798] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.532805] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:12.210 [2024-07-25 07:28:44.532813] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.210 [2024-07-25 07:28:44.532820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.210 [2024-07-25 07:28:44.532840] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.210 [2024-07-25 07:28:44.532845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:12.211 [2024-07-25 07:28:44.532852] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:12.211 [2024-07-25 07:28:44.532858] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.532867] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.532875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.211 [2024-07-25 07:28:44.532893] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.211 [2024-07-25 07:28:44.532898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:12.211 [2024-07-25 07:28:44.532904] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:12.211 [2024-07-25 07:28:44.532910] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:12.211 [2024-07-25 07:28:44.532916] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.532923] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:12.211 [2024-07-25 07:28:44.533030] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:12.211 [2024-07-25 07:28:44.533036] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:12.211 [2024-07-25 07:28:44.533048] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.211 [2024-07-25 07:28:44.533073] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.211 [2024-07-25 07:28:44.533079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:12.211 [2024-07-25 07:28:44.533087] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:12.211 [2024-07-25 07:28:44.533093] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533101] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.211 [2024-07-25 07:28:44.533132] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.211 [2024-07-25 07:28:44.533138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:12.211 [2024-07-25 07:28:44.533144] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:23:12.211 [2024-07-25 07:28:44.533150] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:12.211 [2024-07-25 07:28:44.533156] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533163] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:12.211 [2024-07-25 07:28:44.533177] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:12.211 [2024-07-25 07:28:44.533186] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182000 00:23:12.211 [2024-07-25 07:28:44.533232] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.211 [2024-07-25 07:28:44.533238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:12.211 [2024-07-25 07:28:44.533247] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:12.211 [2024-07-25 07:28:44.533253] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:12.211 [2024-07-25 07:28:44.533259] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:12.211 [2024-07-25 07:28:44.533267] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:12.211 [2024-07-25 07:28:44.533274] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:12.211 [2024-07-25 07:28:44.533279] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:12.211 [2024-07-25 07:28:44.533285] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533293] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:12.211 [2024-07-25 07:28:44.533300] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.211 [2024-07-25 07:28:44.533332] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.211 [2024-07-25 07:28:44.533337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:12.211 [2024-07-25 07:28:44.533347] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533354] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.211 [2024-07-25 07:28:44.533362] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.211 [2024-07-25 07:28:44.533376] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.211 [2024-07-25 07:28:44.533390] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.211 [2024-07-25 07:28:44.533403] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:12.211 [2024-07-25 07:28:44.533409] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533417] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:12.211 [2024-07-25 07:28:44.533425] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.211 [2024-07-25 07:28:44.533448] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.211 [2024-07-25 07:28:44.533454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:23:12.211 [2024-07-25 07:28:44.533461] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:12.211 [2024-07-25 07:28:44.533467] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:12.211 [2024-07-25 07:28:44.533473] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533482] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182000 00:23:12.211 [2024-07-25 07:28:44.533510] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.211 [2024-07-25 07:28:44.533516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:12.211 [2024-07-25 07:28:44.533523] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:12.211 [2024-07-25 07:28:44.533553] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x182000 00:23:12.211 [2024-07-25 07:28:44.533571] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.211 [2024-07-25 07:28:44.533593] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.211 [2024-07-25 07:28:44.533598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:12.211 [2024-07-25 07:28:44.533609] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x182000 00:23:12.211 [2024-07-25 07:28:44.533623] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533633] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.211 [2024-07-25 07:28:44.533639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:12.211 [2024-07-25 07:28:44.533645] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182000 00:23:12.211 [2024-07-25 07:28:44.533652] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.211 [2024-07-25 07:28:44.533657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:12.212 [2024-07-25 07:28:44.533667] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182000 00:23:12.212 [2024-07-25 07:28:44.533675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x182000 00:23:12.212 [2024-07-25 07:28:44.533681] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182000 00:23:12.212 [2024-07-25 07:28:44.533697] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.212 [2024-07-25 07:28:44.533703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:12.212 [2024-07-25 07:28:44.533713] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182000 00:23:12.212 ===================================================== 00:23:12.212 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:12.212 
===================================================== 00:23:12.212 Controller Capabilities/Features 00:23:12.212 ================================ 00:23:12.212 Vendor ID: 0000 00:23:12.212 Subsystem Vendor ID: 0000 00:23:12.212 Serial Number: .................... 00:23:12.212 Model Number: ........................................ 00:23:12.212 Firmware Version: 24.09 00:23:12.212 Recommended Arb Burst: 0 00:23:12.212 IEEE OUI Identifier: 00 00 00 00:23:12.212 Multi-path I/O 00:23:12.212 May have multiple subsystem ports: No 00:23:12.212 May have multiple controllers: No 00:23:12.212 Associated with SR-IOV VF: No 00:23:12.212 Max Data Transfer Size: 131072 00:23:12.212 Max Number of Namespaces: 0 00:23:12.212 Max Number of I/O Queues: 1024 00:23:12.212 NVMe Specification Version (VS): 1.3 00:23:12.212 NVMe Specification Version (Identify): 1.3 00:23:12.212 Maximum Queue Entries: 128 00:23:12.212 Contiguous Queues Required: Yes 00:23:12.212 Arbitration Mechanisms Supported 00:23:12.212 Weighted Round Robin: Not Supported 00:23:12.212 Vendor Specific: Not Supported 00:23:12.212 Reset Timeout: 15000 ms 00:23:12.212 Doorbell Stride: 4 bytes 00:23:12.212 NVM Subsystem Reset: Not Supported 00:23:12.212 Command Sets Supported 00:23:12.212 NVM Command Set: Supported 00:23:12.212 Boot Partition: Not Supported 00:23:12.212 Memory Page Size Minimum: 4096 bytes 00:23:12.212 Memory Page Size Maximum: 4096 bytes 00:23:12.212 Persistent Memory Region: Not Supported 00:23:12.212 Optional Asynchronous Events Supported 00:23:12.212 Namespace Attribute Notices: Not Supported 00:23:12.212 Firmware Activation Notices: Not Supported 00:23:12.212 ANA Change Notices: Not Supported 00:23:12.212 PLE Aggregate Log Change Notices: Not Supported 00:23:12.212 LBA Status Info Alert Notices: Not Supported 00:23:12.212 EGE Aggregate Log Change Notices: Not Supported 00:23:12.212 Normal NVM Subsystem Shutdown event: Not Supported 00:23:12.212 Zone Descriptor Change Notices: Not Supported 00:23:12.212 Discovery Log Change Notices: Supported 00:23:12.212 Controller Attributes 00:23:12.212 128-bit Host Identifier: Not Supported 00:23:12.212 Non-Operational Permissive Mode: Not Supported 00:23:12.212 NVM Sets: Not Supported 00:23:12.212 Read Recovery Levels: Not Supported 00:23:12.212 Endurance Groups: Not Supported 00:23:12.212 Predictable Latency Mode: Not Supported 00:23:12.212 Traffic Based Keep ALive: Not Supported 00:23:12.212 Namespace Granularity: Not Supported 00:23:12.212 SQ Associations: Not Supported 00:23:12.212 UUID List: Not Supported 00:23:12.212 Multi-Domain Subsystem: Not Supported 00:23:12.212 Fixed Capacity Management: Not Supported 00:23:12.212 Variable Capacity Management: Not Supported 00:23:12.212 Delete Endurance Group: Not Supported 00:23:12.212 Delete NVM Set: Not Supported 00:23:12.212 Extended LBA Formats Supported: Not Supported 00:23:12.212 Flexible Data Placement Supported: Not Supported 00:23:12.212 00:23:12.212 Controller Memory Buffer Support 00:23:12.212 ================================ 00:23:12.212 Supported: No 00:23:12.212 00:23:12.212 Persistent Memory Region Support 00:23:12.212 ================================ 00:23:12.212 Supported: No 00:23:12.212 00:23:12.212 Admin Command Set Attributes 00:23:12.212 ============================ 00:23:12.212 Security Send/Receive: Not Supported 00:23:12.212 Format NVM: Not Supported 00:23:12.212 Firmware Activate/Download: Not Supported 00:23:12.212 Namespace Management: Not Supported 00:23:12.212 Device Self-Test: Not Supported 00:23:12.212 
Directives: Not Supported 00:23:12.212 NVMe-MI: Not Supported 00:23:12.212 Virtualization Management: Not Supported 00:23:12.212 Doorbell Buffer Config: Not Supported 00:23:12.212 Get LBA Status Capability: Not Supported 00:23:12.212 Command & Feature Lockdown Capability: Not Supported 00:23:12.212 Abort Command Limit: 1 00:23:12.212 Async Event Request Limit: 4 00:23:12.212 Number of Firmware Slots: N/A 00:23:12.212 Firmware Slot 1 Read-Only: N/A 00:23:12.212 Firmware Activation Without Reset: N/A 00:23:12.212 Multiple Update Detection Support: N/A 00:23:12.212 Firmware Update Granularity: No Information Provided 00:23:12.212 Per-Namespace SMART Log: No 00:23:12.212 Asymmetric Namespace Access Log Page: Not Supported 00:23:12.212 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:12.212 Command Effects Log Page: Not Supported 00:23:12.212 Get Log Page Extended Data: Supported 00:23:12.212 Telemetry Log Pages: Not Supported 00:23:12.212 Persistent Event Log Pages: Not Supported 00:23:12.212 Supported Log Pages Log Page: May Support 00:23:12.212 Commands Supported & Effects Log Page: Not Supported 00:23:12.212 Feature Identifiers & Effects Log Page:May Support 00:23:12.212 NVMe-MI Commands & Effects Log Page: May Support 00:23:12.212 Data Area 4 for Telemetry Log: Not Supported 00:23:12.212 Error Log Page Entries Supported: 128 00:23:12.212 Keep Alive: Not Supported 00:23:12.212 00:23:12.212 NVM Command Set Attributes 00:23:12.212 ========================== 00:23:12.212 Submission Queue Entry Size 00:23:12.212 Max: 1 00:23:12.212 Min: 1 00:23:12.212 Completion Queue Entry Size 00:23:12.212 Max: 1 00:23:12.212 Min: 1 00:23:12.212 Number of Namespaces: 0 00:23:12.212 Compare Command: Not Supported 00:23:12.212 Write Uncorrectable Command: Not Supported 00:23:12.212 Dataset Management Command: Not Supported 00:23:12.212 Write Zeroes Command: Not Supported 00:23:12.212 Set Features Save Field: Not Supported 00:23:12.212 Reservations: Not Supported 00:23:12.212 Timestamp: Not Supported 00:23:12.212 Copy: Not Supported 00:23:12.212 Volatile Write Cache: Not Present 00:23:12.212 Atomic Write Unit (Normal): 1 00:23:12.212 Atomic Write Unit (PFail): 1 00:23:12.212 Atomic Compare & Write Unit: 1 00:23:12.212 Fused Compare & Write: Supported 00:23:12.212 Scatter-Gather List 00:23:12.212 SGL Command Set: Supported 00:23:12.212 SGL Keyed: Supported 00:23:12.212 SGL Bit Bucket Descriptor: Not Supported 00:23:12.212 SGL Metadata Pointer: Not Supported 00:23:12.212 Oversized SGL: Not Supported 00:23:12.212 SGL Metadata Address: Not Supported 00:23:12.212 SGL Offset: Supported 00:23:12.212 Transport SGL Data Block: Not Supported 00:23:12.212 Replay Protected Memory Block: Not Supported 00:23:12.212 00:23:12.212 Firmware Slot Information 00:23:12.212 ========================= 00:23:12.212 Active slot: 0 00:23:12.212 00:23:12.212 00:23:12.212 Error Log 00:23:12.212 ========= 00:23:12.212 00:23:12.212 Active Namespaces 00:23:12.212 ================= 00:23:12.212 Discovery Log Page 00:23:12.212 ================== 00:23:12.212 Generation Counter: 2 00:23:12.212 Number of Records: 2 00:23:12.212 Record Format: 0 00:23:12.212 00:23:12.212 Discovery Log Entry 0 00:23:12.212 ---------------------- 00:23:12.212 Transport Type: 1 (RDMA) 00:23:12.212 Address Family: 1 (IPv4) 00:23:12.212 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:12.212 Entry Flags: 00:23:12.212 Duplicate Returned Information: 1 00:23:12.212 Explicit Persistent Connection Support for Discovery: 1 00:23:12.212 Transport Requirements: 
00:23:12.212 Secure Channel: Not Required 00:23:12.212 Port ID: 0 (0x0000) 00:23:12.212 Controller ID: 65535 (0xffff) 00:23:12.212 Admin Max SQ Size: 128 00:23:12.212 Transport Service Identifier: 4420 00:23:12.212 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:12.212 Transport Address: 192.168.100.8 00:23:12.212 Transport Specific Address Subtype - RDMA 00:23:12.213 RDMA QP Service Type: 1 (Reliable Connected) 00:23:12.213 RDMA Provider Type: 1 (No provider specified) 00:23:12.213 RDMA CM Service: 1 (RDMA_CM) 00:23:12.213 Discovery Log Entry 1 00:23:12.213 ---------------------- 00:23:12.213 Transport Type: 1 (RDMA) 00:23:12.213 Address Family: 1 (IPv4) 00:23:12.213 Subsystem Type: 2 (NVM Subsystem) 00:23:12.213 Entry Flags: 00:23:12.213 Duplicate Returned Information: 0 00:23:12.213 Explicit Persistent Connection Support for Discovery: 0 00:23:12.213 Transport Requirements: 00:23:12.213 Secure Channel: Not Required 00:23:12.213 Port ID: 0 (0x0000) 00:23:12.213 Controller ID: 65535 (0xffff) 00:23:12.213 Admin Max SQ Size: [2024-07-25 07:28:44.533785] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:12.213 [2024-07-25 07:28:44.533794] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 40464 doesn't match qid 00:23:12.213 [2024-07-25 07:28:44.533809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:5 sqhd:4f40 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.533816] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 40464 doesn't match qid 00:23:12.213 [2024-07-25 07:28:44.533824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:5 sqhd:4f40 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.533830] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 40464 doesn't match qid 00:23:12.213 [2024-07-25 07:28:44.533838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:5 sqhd:4f40 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.533845] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 40464 doesn't match qid 00:23:12.213 [2024-07-25 07:28:44.533853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:5 sqhd:4f40 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.533861] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.533869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.533888] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.533894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.533904] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.533913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.533919] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 
07:28:44.533934] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.533940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.533947] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:12.213 [2024-07-25 07:28:44.533954] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:12.213 [2024-07-25 07:28:44.533960] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.533968] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.533976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.533996] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.534002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.534010] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534019] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.534046] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.534052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.534059] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534068] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.534094] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.534101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.534107] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534116] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.534145] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.534151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.534159] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534169] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.534200] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.534207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.534213] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534223] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.534252] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.534257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.534264] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534272] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.534300] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.534306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.534313] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534321] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.534350] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.534356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.534363] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534372] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.534401] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.534407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.534413] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534422] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.534451] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.534457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.534465] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534474] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.534502] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.534508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:12.213 [2024-07-25 07:28:44.534514] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534523] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.213 [2024-07-25 07:28:44.534531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.213 [2024-07-25 07:28:44.534547] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.213 [2024-07-25 07:28:44.534552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.534559] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534568] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.534596] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.534601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.534608] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534616] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.534648] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.534653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.534660] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534669] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.534693] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.534699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.534705] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534714] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.534740] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.534745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.534753] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534762] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.534786] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.534791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.534798] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534807] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.534830] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.534836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.534842] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534851] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.534878] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.534884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.534890] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534899] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.534928] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.534934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.534940] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534949] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.534973] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.534978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.534985] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.534993] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.535001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.535023] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.535030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.535036] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.535045] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.535052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.535072] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.535078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.535084] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.535093] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.535101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.535117] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.535122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.535129] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.535138] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.535145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.535163] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.535169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.535175] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.535184] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.535191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.535207] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.535213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.535219] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.535228] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.214 [2024-07-25 07:28:44.535236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.214 [2024-07-25 07:28:44.535259] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.214 [2024-07-25 07:28:44.535265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:12.214 [2024-07-25 07:28:44.535271] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535280] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535308] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535321] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535330] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535359] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535371] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535380] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535411] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535423] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535432] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535459] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535472] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535480] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535508] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535520] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535529] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535556] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535568] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535577] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535603] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535615] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535628] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535656] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535668] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535677] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535703] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535714] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535723] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535751] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535763] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535771] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535799] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535811] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535820] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535843] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535855] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535864] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535889] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535901] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535910] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535941] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.535953] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535962] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.535970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.535988] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.535993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.536000] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.536008] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.536016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.215 [2024-07-25 07:28:44.536036] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.215 [2024-07-25 07:28:44.536041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:12.215 [2024-07-25 07:28:44.536048] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182000 00:23:12.215 [2024-07-25 07:28:44.536056] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.536082] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.536088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.536094] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536103] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.536132] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.536138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.536144] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536153] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.536184] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.536189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.536196] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536204] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.536238] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.536243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.536250] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536258] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.536284] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.536289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.536296] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536305] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.536332] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.536338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.536344] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536353] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.536376] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.536382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.536388] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536397] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.536425] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.536430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.536437] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536445] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.536471] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.536476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.536483] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536491] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.536515] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.536520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.536527] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536536] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.536565] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.536571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.536577] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536586] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.536593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.536611] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.536617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.536623] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.540639] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.540647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.216 [2024-07-25 07:28:44.540669] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.216 [2024-07-25 07:28:44.540674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0009 p:0 m:0 dnr:0 00:23:12.216 [2024-07-25 07:28:44.540681] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182000 00:23:12.216 [2024-07-25 07:28:44.540688] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:12.216 128 00:23:12.216 Transport Service Identifier: 4420 00:23:12.216 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:12.216 Transport Address: 192.168.100.8 00:23:12.216 Transport Specific Address Subtype - RDMA 00:23:12.216 RDMA QP Service Type: 1 (Reliable Connected) 00:23:12.216 RDMA Provider Type: 1 (No provider specified) 00:23:12.216 RDMA CM Service: 1 (RDMA_CM) 00:23:12.216 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:12.216 [2024-07-25 07:28:44.613843] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:23:12.216 [2024-07-25 07:28:44.613882] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2767088 ] 00:23:12.216 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.216 [2024-07-25 07:28:44.660894] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:12.216 [2024-07-25 07:28:44.660967] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:23:12.216 [2024-07-25 07:28:44.660983] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:23:12.216 [2024-07-25 07:28:44.660988] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:23:12.216 [2024-07-25 07:28:44.661010] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:12.216 [2024-07-25 07:28:44.672171] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
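The identify.sh step above invokes spdk_nvme_identify a second time, now against the NVM subsystem nqn.2016-06.io.spdk:cnode1 rather than the discovery subsystem. As a rough illustration of the flow these debug traces record (parse a transport ID, connect the admin queue pair over RDMA, read the IDENTIFY data, detach), here is a minimal sketch against SPDK's public NVMe API. It reuses the transport string from this run; the program name, error handling, and printed fields are illustrative assumptions, not the tool's actual implementation:

#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    /* Set up the SPDK environment (DPDK EAL, hugepage-backed memory). */
    spdk_env_opts_init(&opts);
    opts.name = "identify_sketch";  /* hypothetical app name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* Same transport ID string passed to spdk_nvme_identify via -r above. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Connecting runs the admin-queue init state machine traced in the
     * debug log: FABRIC CONNECT, read vs/cap, IDENTIFY, AER configuration,
     * keep-alive timeout, then "ready". */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    /* The cached IDENTIFY CONTROLLER data backs the report printed above. */
    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("CNTLID 0x%04x MDTS %u\n", cdata->cntlid, cdata->mdts);

    /* Detaching triggers the shutdown sequence seen earlier in the log:
     * a CC property set, then CSTS polling via Fabrics Property Get until
     * shutdown completes. */
    spdk_nvme_detach(ctrlr);
    return 0;
}
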
00:23:12.216 [2024-07-25 07:28:44.682283] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:12.216 [2024-07-25 07:28:44.682293] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:23:12.216 [2024-07-25 07:28:44.682301] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682308] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682314] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682320] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682326] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682333] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682339] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682345] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682351] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682358] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682364] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682370] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682376] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682383] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682389] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682395] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682401] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682410] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682417] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682423] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682429] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682435] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682442] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 
07:28:44.682448] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682454] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682460] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682467] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682473] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682479] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682485] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682492] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682497] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:23:12.217 [2024-07-25 07:28:44.682503] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:12.217 [2024-07-25 07:28:44.682507] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:23:12.217 [2024-07-25 07:28:44.682520] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.682531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182000 00:23:12.217 [2024-07-25 07:28:44.687631] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.217 [2024-07-25 07:28:44.687639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:12.217 [2024-07-25 07:28:44.687646] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.687654] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:12.217 [2024-07-25 07:28:44.687660] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:12.217 [2024-07-25 07:28:44.687667] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:12.217 [2024-07-25 07:28:44.687680] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.687689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.217 [2024-07-25 07:28:44.687706] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.217 [2024-07-25 07:28:44.687712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:23:12.217 [2024-07-25 07:28:44.687718] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:12.217 [2024-07-25 07:28:44.687724] nvme_rdma.c:2367:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.687733] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:12.217 [2024-07-25 07:28:44.687742] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.687749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.217 [2024-07-25 07:28:44.687771] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.217 [2024-07-25 07:28:44.687777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:23:12.217 [2024-07-25 07:28:44.687783] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:12.217 [2024-07-25 07:28:44.687790] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.687797] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:12.217 [2024-07-25 07:28:44.687804] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.687812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.217 [2024-07-25 07:28:44.687832] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.217 [2024-07-25 07:28:44.687837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:12.217 [2024-07-25 07:28:44.687844] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:12.217 [2024-07-25 07:28:44.687850] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.687858] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.687866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.217 [2024-07-25 07:28:44.687882] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.217 [2024-07-25 07:28:44.687888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:12.217 [2024-07-25 07:28:44.687894] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:12.217 [2024-07-25 07:28:44.687900] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:12.217 [2024-07-25 07:28:44.687906] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.687913] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:23:12.217 [2024-07-25 07:28:44.688020] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:12.217 [2024-07-25 07:28:44.688025] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:12.217 [2024-07-25 07:28:44.688038] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.688045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.217 [2024-07-25 07:28:44.688064] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.217 [2024-07-25 07:28:44.688069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:12.217 [2024-07-25 07:28:44.688077] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:12.217 [2024-07-25 07:28:44.688083] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.688092] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.688099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.217 [2024-07-25 07:28:44.688117] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.217 [2024-07-25 07:28:44.688123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:12.217 [2024-07-25 07:28:44.688129] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:12.217 [2024-07-25 07:28:44.688135] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:12.217 [2024-07-25 07:28:44.688141] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.688148] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:12.217 [2024-07-25 07:28:44.688157] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:12.217 [2024-07-25 07:28:44.688166] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.217 [2024-07-25 07:28:44.688174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182000 00:23:12.218 [2024-07-25 07:28:44.688211] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.218 [2024-07-25 07:28:44.688217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:12.218 [2024-07-25 07:28:44.688225] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:12.218 [2024-07-25 07:28:44.688231] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:12.218 [2024-07-25 07:28:44.688237] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:12.218 [2024-07-25 07:28:44.688244] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:12.218 [2024-07-25 07:28:44.688250] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:12.218 [2024-07-25 07:28:44.688256] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688262] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688269] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688277] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.218 [2024-07-25 07:28:44.688305] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.218 [2024-07-25 07:28:44.688310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:12.218 [2024-07-25 07:28:44.688318] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.218 [2024-07-25 07:28:44.688335] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.218 [2024-07-25 07:28:44.688349] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.218 [2024-07-25 07:28:44.688363] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.218 [2024-07-25 07:28:44.688376] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688382] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688390] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688397] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688405] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.218 [2024-07-25 07:28:44.688425] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.218 [2024-07-25 07:28:44.688431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:23:12.218 [2024-07-25 07:28:44.688437] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:12.218 [2024-07-25 07:28:44.688443] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688450] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688457] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688464] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688471] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.218 [2024-07-25 07:28:44.688495] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.218 [2024-07-25 07:28:44.688500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:23:12.218 [2024-07-25 07:28:44.688553] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688559] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688567] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688575] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182000 00:23:12.218 [2024-07-25 07:28:44.688609] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.218 [2024-07-25 07:28:44.688614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:12.218 [2024-07-25 07:28:44.688633] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:12.218 
[2024-07-25 07:28:44.688643] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688649] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688665] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182000 00:23:12.218 [2024-07-25 07:28:44.688704] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.218 [2024-07-25 07:28:44.688710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:12.218 [2024-07-25 07:28:44.688722] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688729] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688736] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688744] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182000 00:23:12.218 [2024-07-25 07:28:44.688776] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.218 [2024-07-25 07:28:44.688781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:12.218 [2024-07-25 07:28:44.688790] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688796] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688803] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688812] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688819] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688825] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688832] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host 
ID (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688838] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:12.218 [2024-07-25 07:28:44.688845] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:12.218 [2024-07-25 07:28:44.688852] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:12.218 [2024-07-25 07:28:44.688866] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.218 [2024-07-25 07:28:44.688881] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.218 [2024-07-25 07:28:44.688899] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.218 [2024-07-25 07:28:44.688905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:12.218 [2024-07-25 07:28:44.688911] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688921] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.218 [2024-07-25 07:28:44.688928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.218 [2024-07-25 07:28:44.688936] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.218 [2024-07-25 07:28:44.688942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:12.218 [2024-07-25 07:28:44.688948] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182000 00:23:12.219 [2024-07-25 07:28:44.688955] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.219 [2024-07-25 07:28:44.688960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:12.219 [2024-07-25 07:28:44.688966] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182000 00:23:12.219 [2024-07-25 07:28:44.688976] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.219 [2024-07-25 07:28:44.688983] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.219 [2024-07-25 07:28:44.689000] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.219 [2024-07-25 07:28:44.689006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:12.219 [2024-07-25 07:28:44.689012] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: 
local addr 0x2000003cf8e8 length 0x10 lkey 0x182000 00:23:12.219 [2024-07-25 07:28:44.689021] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.219 [2024-07-25 07:28:44.689029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.219 [2024-07-25 07:28:44.689046] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.219 [2024-07-25 07:28:44.689052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:23:12.219 [2024-07-25 07:28:44.689058] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182000 00:23:12.219 [2024-07-25 07:28:44.689071] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:23:12.219 [2024-07-25 07:28:44.689079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x182000 00:23:12.219 [2024-07-25 07:28:44.689090] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182000 00:23:12.219 [2024-07-25 07:28:44.689098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x182000 00:23:12.219 [2024-07-25 07:28:44.689106] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x182000 00:23:12.219 [2024-07-25 07:28:44.689114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x182000 00:23:12.219 [2024-07-25 07:28:44.689122] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182000 00:23:12.219 [2024-07-25 07:28:44.689130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x182000 00:23:12.219 [2024-07-25 07:28:44.689138] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.219 [2024-07-25 07:28:44.689144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:12.219 [2024-07-25 07:28:44.689156] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182000 00:23:12.219 [2024-07-25 07:28:44.689162] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.219 [2024-07-25 07:28:44.689168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:12.219 [2024-07-25 07:28:44.689178] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182000 00:23:12.219 [2024-07-25 07:28:44.689185] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.219 [2024-07-25 07:28:44.689191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
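The GET FEATURES KEEP ALIVE TIMER exchange earlier in this trace returned cdw0:2710 (0x2710 = 10000 ms), and the driver reported "Sending keep alive every 5000000 us", i.e. at half the negotiated timeout. A sketch of how a host application would request that timeout and keep the admin queue serviced so the KEEP ALIVE (opcode 18h) commands actually go out; attach_with_keep_alive is a hypothetical helper name, while the SPDK calls themselves are public API:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Hypothetical helper: attach with an explicit keep-alive request and
     * service the admin queue so keep alives are emitted. */
    static struct spdk_nvme_ctrlr *
    attach_with_keep_alive(const struct spdk_nvme_transport_id *trid)
    {
        struct spdk_nvme_ctrlr_opts opts;
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
        /* Host-requested keep-alive timeout; the target above granted
         * 10000 ms and the driver keeps alive at half that interval. */
        opts.keep_alive_timeout_ms = 10000;

        ctrlr = spdk_nvme_connect(trid, &opts, sizeof(opts));
        if (ctrlr == NULL) {
            return NULL;
        }

        /* Keep-alive submission piggybacks on admin-queue processing, so
         * the application must call this periodically. */
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        return ctrlr;
    }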
00:23:12.219 [2024-07-25 07:28:44.689198] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182000
00:23:12.219 [2024-07-25 07:28:44.689204] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:12.219 [2024-07-25 07:28:44.689210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:23:12.219 [2024-07-25 07:28:44.689219] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182000
00:23:12.219 =====================================================
00:23:12.219 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:23:12.219 =====================================================
00:23:12.219 Controller Capabilities/Features
00:23:12.219 ================================
00:23:12.219 Vendor ID: 8086
00:23:12.219 Subsystem Vendor ID: 8086
00:23:12.219 Serial Number: SPDK00000000000001
00:23:12.219 Model Number: SPDK bdev Controller
00:23:12.219 Firmware Version: 24.09
00:23:12.219 Recommended Arb Burst: 6
00:23:12.219 IEEE OUI Identifier: e4 d2 5c
00:23:12.219 Multi-path I/O
00:23:12.219 May have multiple subsystem ports: Yes
00:23:12.219 May have multiple controllers: Yes
00:23:12.219 Associated with SR-IOV VF: No
00:23:12.219 Max Data Transfer Size: 131072
00:23:12.219 Max Number of Namespaces: 32
00:23:12.219 Max Number of I/O Queues: 127
00:23:12.219 NVMe Specification Version (VS): 1.3
00:23:12.219 NVMe Specification Version (Identify): 1.3
00:23:12.219 Maximum Queue Entries: 128
00:23:12.219 Contiguous Queues Required: Yes
00:23:12.219 Arbitration Mechanisms Supported
00:23:12.219 Weighted Round Robin: Not Supported
00:23:12.219 Vendor Specific: Not Supported
00:23:12.219 Reset Timeout: 15000 ms
00:23:12.219 Doorbell Stride: 4 bytes
00:23:12.219 NVM Subsystem Reset: Not Supported
00:23:12.219 Command Sets Supported
00:23:12.219 NVM Command Set: Supported
00:23:12.219 Boot Partition: Not Supported
00:23:12.219 Memory Page Size Minimum: 4096 bytes
00:23:12.219 Memory Page Size Maximum: 4096 bytes
00:23:12.219 Persistent Memory Region: Not Supported
00:23:12.219 Optional Asynchronous Events Supported
00:23:12.219 Namespace Attribute Notices: Supported
00:23:12.219 Firmware Activation Notices: Not Supported
00:23:12.219 ANA Change Notices: Not Supported
00:23:12.219 PLE Aggregate Log Change Notices: Not Supported
00:23:12.219 LBA Status Info Alert Notices: Not Supported
00:23:12.219 EGE Aggregate Log Change Notices: Not Supported
00:23:12.219 Normal NVM Subsystem Shutdown event: Not Supported
00:23:12.219 Zone Descriptor Change Notices: Not Supported
00:23:12.219 Discovery Log Change Notices: Not Supported
00:23:12.219 Controller Attributes
00:23:12.219 128-bit Host Identifier: Supported
00:23:12.219 Non-Operational Permissive Mode: Not Supported
00:23:12.219 NVM Sets: Not Supported
00:23:12.219 Read Recovery Levels: Not Supported
00:23:12.219 Endurance Groups: Not Supported
00:23:12.219 Predictable Latency Mode: Not Supported
00:23:12.219 Traffic Based Keep ALive: Not Supported
00:23:12.219 Namespace Granularity: Not Supported
00:23:12.219 SQ Associations: Not Supported
00:23:12.219 UUID List: Not Supported
00:23:12.219 Multi-Domain Subsystem: Not Supported
00:23:12.219 Fixed Capacity Management: Not Supported
00:23:12.219 Variable Capacity Management: Not Supported
00:23:12.219 Delete Endurance Group: Not Supported
00:23:12.219 Delete NVM Set: Not Supported
00:23:12.219 Extended LBA Formats Supported: Not Supported
00:23:12.219 Flexible Data Placement Supported: Not Supported
00:23:12.219
00:23:12.219 Controller Memory Buffer Support
00:23:12.219 ================================
00:23:12.219 Supported: No
00:23:12.219
00:23:12.219 Persistent Memory Region Support
00:23:12.219 ================================
00:23:12.219 Supported: No
00:23:12.219
00:23:12.219 Admin Command Set Attributes
00:23:12.219 ============================
00:23:12.219 Security Send/Receive: Not Supported
00:23:12.219 Format NVM: Not Supported
00:23:12.219 Firmware Activate/Download: Not Supported
00:23:12.219 Namespace Management: Not Supported
00:23:12.219 Device Self-Test: Not Supported
00:23:12.220 Directives: Not Supported
00:23:12.220 NVMe-MI: Not Supported
00:23:12.220 Virtualization Management: Not Supported
00:23:12.220 Doorbell Buffer Config: Not Supported
00:23:12.220 Get LBA Status Capability: Not Supported
00:23:12.220 Command & Feature Lockdown Capability: Not Supported
00:23:12.220 Abort Command Limit: 4
00:23:12.220 Async Event Request Limit: 4
00:23:12.220 Number of Firmware Slots: N/A
00:23:12.220 Firmware Slot 1 Read-Only: N/A
00:23:12.220 Firmware Activation Without Reset: N/A
00:23:12.220 Multiple Update Detection Support: N/A
00:23:12.220 Firmware Update Granularity: No Information Provided
00:23:12.220 Per-Namespace SMART Log: No
00:23:12.220 Asymmetric Namespace Access Log Page: Not Supported
00:23:12.220 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:12.220 Command Effects Log Page: Supported
00:23:12.220 Get Log Page Extended Data: Supported
00:23:12.220 Telemetry Log Pages: Not Supported
00:23:12.220 Persistent Event Log Pages: Not Supported
00:23:12.220 Supported Log Pages Log Page: May Support
00:23:12.220 Commands Supported & Effects Log Page: Not Supported
00:23:12.220 Feature Identifiers & Effects Log Page:May Support
00:23:12.220 NVMe-MI Commands & Effects Log Page: May Support
00:23:12.220 Data Area 4 for Telemetry Log: Not Supported
00:23:12.220 Error Log Page Entries Supported: 128
00:23:12.220 Keep Alive: Supported
00:23:12.220 Keep Alive Granularity: 10000 ms
00:23:12.220
00:23:12.220 NVM Command Set Attributes
00:23:12.220 ==========================
00:23:12.220 Submission Queue Entry Size
00:23:12.220 Max: 64
00:23:12.220 Min: 64
00:23:12.220 Completion Queue Entry Size
00:23:12.220 Max: 16
00:23:12.220 Min: 16
00:23:12.220 Number of Namespaces: 32
00:23:12.220 Compare Command: Supported
00:23:12.220 Write Uncorrectable Command: Not Supported
00:23:12.220 Dataset Management Command: Supported
00:23:12.220 Write Zeroes Command: Supported
00:23:12.220 Set Features Save Field: Not Supported
00:23:12.220 Reservations: Supported
00:23:12.220 Timestamp: Not Supported
00:23:12.220 Copy: Supported
00:23:12.220 Volatile Write Cache: Present
00:23:12.220 Atomic Write Unit (Normal): 1
00:23:12.220 Atomic Write Unit (PFail): 1
00:23:12.220 Atomic Compare & Write Unit: 1
00:23:12.220 Fused Compare & Write: Supported
00:23:12.220 Scatter-Gather List
00:23:12.220 SGL Command Set: Supported
00:23:12.220 SGL Keyed: Supported
00:23:12.220 SGL Bit Bucket Descriptor: Not Supported
00:23:12.220 SGL Metadata Pointer: Not Supported
00:23:12.220 Oversized SGL: Not Supported
00:23:12.220 SGL Metadata Address: Not Supported
00:23:12.220 SGL Offset: Supported
00:23:12.220 Transport SGL Data Block: Not Supported
00:23:12.220 Replay Protected Memory Block: Not Supported
00:23:12.220
00:23:12.220 Firmware Slot Information
00:23:12.220 =========================
00:23:12.220 Active slot: 1
00:23:12.220 Slot 1 Firmware Revision: 24.09
00:23:12.220
00:23:12.220
00:23:12.220 Commands Supported and Effects
00:23:12.220 ==============================
00:23:12.220 Admin Commands
00:23:12.220 --------------
00:23:12.220 Get Log Page (02h): Supported
00:23:12.220 Identify (06h): Supported
00:23:12.220 Abort (08h): Supported
00:23:12.220 Set Features (09h): Supported
00:23:12.220 Get Features (0Ah): Supported
00:23:12.220 Asynchronous Event Request (0Ch): Supported
00:23:12.220 Keep Alive (18h): Supported
00:23:12.220 I/O Commands
00:23:12.220 ------------
00:23:12.220 Flush (00h): Supported LBA-Change
00:23:12.220 Write (01h): Supported LBA-Change
00:23:12.220 Read (02h): Supported
00:23:12.220 Compare (05h): Supported
00:23:12.220 Write Zeroes (08h): Supported LBA-Change
00:23:12.220 Dataset Management (09h): Supported LBA-Change
00:23:12.220 Copy (19h): Supported LBA-Change
00:23:12.220
00:23:12.220 Error Log
00:23:12.220 =========
00:23:12.220
00:23:12.220 Arbitration
00:23:12.220 ===========
00:23:12.220 Arbitration Burst: 1
00:23:12.220
00:23:12.220 Power Management
00:23:12.220 ================
00:23:12.220 Number of Power States: 1
00:23:12.220 Current Power State: Power State #0
00:23:12.220 Power State #0:
00:23:12.220 Max Power: 0.00 W
00:23:12.220 Non-Operational State: Operational
00:23:12.220 Entry Latency: Not Reported
00:23:12.220 Exit Latency: Not Reported
00:23:12.220 Relative Read Throughput: 0
00:23:12.220 Relative Read Latency: 0
00:23:12.220 Relative Write Throughput: 0
00:23:12.220 Relative Write Latency: 0
00:23:12.220 Idle Power: Not Reported
00:23:12.220 Active Power: Not Reported
00:23:12.220 Non-Operational Permissive Mode: Not Supported
00:23:12.220
00:23:12.220 Health Information
00:23:12.220 ==================
00:23:12.220 Critical Warnings:
00:23:12.220 Available Spare Space: OK
00:23:12.220 Temperature: OK
00:23:12.220 Device Reliability: OK
00:23:12.220 Read Only: No
00:23:12.220 Volatile Memory Backup: OK
00:23:12.220 Current Temperature: 0 Kelvin (-273 Celsius)
00:23:12.220 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:23:12.220 Available Spare: 0%
00:23:12.220 Available Spare Threshold: 0%
00:23:12.220 Life Percentage [2024-07-25 07:28:44.689296] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182000
00:23:12.220 [2024-07-25 07:28:44.689304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:23:12.220 [2024-07-25 07:28:44.689322] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:12.220 [2024-07-25 07:28:44.689328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:23:12.220 [2024-07-25 07:28:44.689334] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182000
00:23:12.220 [2024-07-25 07:28:44.689362] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:23:12.220 [2024-07-25 07:28:44.689371] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 542 doesn't match qid
00:23:12.220 [2024-07-25 07:28:44.689385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32519 cdw0:5 sqhd:7f40 p:0 m:0 dnr:0
00:23:12.220 [2024-07-25 07:28:44.689392] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 542 doesn't match qid
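The controller report above is rendered from the IDENTIFY CONTROLLER (CNS 01h) data the driver cached during initialization. A short sketch of pulling a few of the same fields through the public API; print_ctrlr_summary is a hypothetical helper name and assumes a ctrlr attached as in the earlier sketch:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Print some of the fields the report above is rendered from. */
    static void
    print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* Cached copy of the IDENTIFY CONTROLLER payload. */
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

        printf("Vendor ID: %04x\n", cdata->vid);                      /* 8086 above */
        printf("Serial Number: %.20s\n", (const char *)cdata->sn);    /* SPDK00000000000001 */
        printf("Model Number: %.40s\n", (const char *)cdata->mn);     /* SPDK bdev Controller */
        printf("Firmware Version: %.8s\n", (const char *)cdata->fr);  /* 24.09 */
        /* MDTS is a power-of-two multiple of the minimum page size; the
         * trace above resolved it to a 131072-byte max transfer size. */
        printf("MDTS (raw): %u\n", cdata->mdts);
    }

After this point the trace shows the detach path: "Prepare to destruct SSD", outstanding admin commands aborted by SQ deletion, and the CC/CSTS shutdown handshake polled over FABRIC PROPERTY GET until the controller reports shutdown complete.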
00:23:12.220 [2024-07-25 07:28:44.689403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32519 cdw0:5 sqhd:7f40 p:0 m:0 dnr:0 00:23:12.220 [2024-07-25 07:28:44.689410] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 542 doesn't match qid 00:23:12.220 [2024-07-25 07:28:44.689418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32519 cdw0:5 sqhd:7f40 p:0 m:0 dnr:0 00:23:12.220 [2024-07-25 07:28:44.689424] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 542 doesn't match qid 00:23:12.220 [2024-07-25 07:28:44.689432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32519 cdw0:5 sqhd:7f40 p:0 m:0 dnr:0 00:23:12.220 [2024-07-25 07:28:44.689441] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182000 00:23:12.220 [2024-07-25 07:28:44.689449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.220 [2024-07-25 07:28:44.689471] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.220 [2024-07-25 07:28:44.689477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:23:12.220 [2024-07-25 07:28:44.689486] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.220 [2024-07-25 07:28:44.689494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.220 [2024-07-25 07:28:44.689500] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182000 00:23:12.220 [2024-07-25 07:28:44.689515] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.220 [2024-07-25 07:28:44.689520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:12.220 [2024-07-25 07:28:44.689526] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:12.220 [2024-07-25 07:28:44.689532] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:12.220 [2024-07-25 07:28:44.689538] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182000 00:23:12.220 [2024-07-25 07:28:44.689547] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.220 [2024-07-25 07:28:44.689555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.220 [2024-07-25 07:28:44.689579] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.220 [2024-07-25 07:28:44.689585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:12.220 [2024-07-25 07:28:44.689591] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689600] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689608] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.221 [2024-07-25 07:28:44.689628] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.221 [2024-07-25 07:28:44.689635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:12.221 [2024-07-25 07:28:44.689641] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689650] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.221 [2024-07-25 07:28:44.689680] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.221 [2024-07-25 07:28:44.689687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:12.221 [2024-07-25 07:28:44.689695] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689704] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.221 [2024-07-25 07:28:44.689730] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.221 [2024-07-25 07:28:44.689736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:12.221 [2024-07-25 07:28:44.689742] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689751] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.221 [2024-07-25 07:28:44.689783] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.221 [2024-07-25 07:28:44.689789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:12.221 [2024-07-25 07:28:44.689795] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689804] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.221 [2024-07-25 07:28:44.689833] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.221 [2024-07-25 07:28:44.689839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:12.221 [2024-07-25 07:28:44.689847] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local 
addr 0x2000003cf640 length 0x10 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689856] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.221 [2024-07-25 07:28:44.689880] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.221 [2024-07-25 07:28:44.689886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:12.221 [2024-07-25 07:28:44.689893] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689902] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.221 [2024-07-25 07:28:44.689924] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.221 [2024-07-25 07:28:44.689930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:12.221 [2024-07-25 07:28:44.689936] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689945] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.221 [2024-07-25 07:28:44.689971] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.221 [2024-07-25 07:28:44.689977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:12.221 [2024-07-25 07:28:44.689984] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.689993] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.690001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.221 [2024-07-25 07:28:44.690023] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.221 [2024-07-25 07:28:44.690029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:12.221 [2024-07-25 07:28:44.690035] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.690044] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:23:12.221 [2024-07-25 07:28:44.690052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:12.221 [2024-07-25 07:28:44.690072] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:12.221 
00:23:12.221 [2024-07-25 07:28:44.690078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0
[... the nvme_rdma_request_ready / nvme_rdma_qpair_submit_request / FABRIC PROPERTY GET / CQ recv completion DEBUG cycle above repeats unchanged for sqhd 0006 through 001f and wraps back through sqhd 0005; successive iterations differ only in the sqhd counter and the request buffer address, which cycles between 0x2000003cf640 and 0x2000003cfaf0 ...]
00:23:12.223 [2024-07-25 07:28:44.695633] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000
00:23:12.223 [2024-07-25 07:28:44.695642] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000
00:23:12.223 [2024-07-25 07:28:44.695650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:23:12.223 [2024-07-25 07:28:44.695672] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:12.223 [2024-07-25 07:28:44.695678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0006 p:0 m:0 dnr:0
00:23:12.223 [2024-07-25 07:28:44.695686] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000
00:23:12.223 [2024-07-25 07:28:44.695693] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:23:12.483 Used: 0%
00:23:12.483 Data Units Read: 0
00:23:12.483 Data Units Written: 0
00:23:12.483 Host Read Commands: 0
00:23:12.483 Host Write Commands: 0
00:23:12.483 Controller Busy Time: 0 minutes
00:23:12.483 Power Cycles: 0
00:23:12.483 Power On Hours: 0 hours
00:23:12.483 Unsafe Shutdowns: 0
00:23:12.483 Unrecoverable Media Errors: 0
00:23:12.483 Lifetime Error Log Entries: 0
00:23:12.483 Warning Temperature Time: 0 minutes
00:23:12.483 Critical Temperature Time: 0 minutes
00:23:12.483
00:23:12.483 Number of Queues
00:23:12.483 ================
00:23:12.483 Number of I/O Submission Queues: 127
00:23:12.483 Number of I/O Completion Queues: 127
00:23:12.483
00:23:12.483 Active Namespaces
00:23:12.483 =================
00:23:12.483 Namespace ID:1
00:23:12.483 Error Recovery Timeout: Unlimited
00:23:12.483 Command Set Identifier: NVM (00h)
00:23:12.483 Deallocate: Supported
00:23:12.483 Deallocated/Unwritten Error: Not Supported
00:23:12.483 Deallocated Read Value: Unknown
00:23:12.483 Deallocate in Write Zeroes: Not Supported
00:23:12.483 Deallocated Guard Field: 0xFFFF
00:23:12.483 Flush: Supported
00:23:12.483 Reservation: Supported
00:23:12.483 Namespace Sharing Capabilities: Multiple Controllers
00:23:12.483 Size (in LBAs): 131072 (0GiB)
00:23:12.483 Capacity (in LBAs): 131072 (0GiB)
00:23:12.483 Utilization (in LBAs): 131072 (0GiB)
00:23:12.483 NGUID: ABCDEF0123456789ABCDEF0123456789
00:23:12.483 EUI64: ABCDEF0123456789
00:23:12.483 UUID: 853bd8aa-de79-42a5-99d3-b5b9fb02411f
00:23:12.483 Thin Provisioning: Not Supported
00:23:12.483 Per-NS Atomic Units: Yes
00:23:12.483 Atomic Boundary Size (Normal): 0
00:23:12.483 Atomic Boundary Size (PFail): 0
00:23:12.483 Atomic Boundary Offset: 0
00:23:12.483 Maximum Single Source Range Length: 65535
00:23:12.483 Maximum Copy Length: 65535
00:23:12.483 Maximum Source Range Count: 1
00:23:12.483 NGUID/EUI64 Never Reused: No
00:23:12.483 Namespace Write Protected: No
00:23:12.483 Number of LBA Formats: 1
00:23:12.483 Current LBA Format: LBA Format #00
00:23:12.483 LBA Format #00: Data Size: 512 Metadata Size: 0
00:23:12.483
00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify --
host/identify.sh@56 -- # nvmftestfini 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:12.483 rmmod nvme_rdma 00:23:12.483 rmmod nvme_fabrics 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2766844 ']' 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2766844 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2766844 ']' 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2766844 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2766844 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2766844' 00:23:12.483 killing process with pid 2766844 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2766844 00:23:12.483 07:28:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2766844 00:23:12.743 07:28:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:12.743 07:28:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:12.743 00:23:12.743 real 0m10.058s 00:23:12.743 user 0m8.574s 00:23:12.743 sys 0m6.717s 00:23:12.743 07:28:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:12.743 07:28:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.743 ************************************ 00:23:12.743 END TEST nvmf_identify 00:23:12.743 ************************************ 00:23:12.743 07:28:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:23:12.743 07:28:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:12.743 07:28:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:12.743 07:28:45 nvmf_rdma.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:12.743 ************************************ 00:23:12.743 START TEST nvmf_perf 00:23:12.743 ************************************ 00:23:12.743 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:23:13.002 * Looking for test storage... 00:23:13.002 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.002 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.003 07:28:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:21.125 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:21.125 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:21.126 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 
0 )) 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:21.126 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:21.126 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:21.126 07:28:53 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:21.126 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:21.126 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:21.126 altname enp217s0f0np0 00:23:21.126 altname ens818f0np0 00:23:21.126 inet 192.168.100.8/24 scope global mlx_0_0 00:23:21.126 valid_lft forever preferred_lft forever 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 
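The address plumbing being traced in this stretch is compact enough to restate. A minimal sketch, assuming the same two mlx interfaces; get_ip_address mirrors the helper traced above, and the first/second target selection matches the head/tail records traced a little further down:

    # Extract the IPv4 address of an RDMA interface: `ip -o -4 addr show`
    # prints one record per address, field 4 holds the CIDR form
    # (e.g. 192.168.100.8/24), and cut strips the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # Collect one address per interface, then take the first entry as the
    # primary target IP and the second as the alternate.
    RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9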
00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:21.126 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:21.126 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:21.126 altname enp217s0f1np1 00:23:21.126 altname ens818f1np1 00:23:21.126 inet 192.168.100.9/24 scope global mlx_0_1 00:23:21.126 valid_lft forever preferred_lft forever 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 
00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:21.126 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:21.127 192.168.100.9' 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:21.127 192.168.100.9' 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:21.127 192.168.100.9' 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2771248 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2771248 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2771248 ']' 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:21.127 07:28:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:21.127 [2024-07-25 07:28:53.601115] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:23:21.127 [2024-07-25 07:28:53.601162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.127 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.385 [2024-07-25 07:28:53.684397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:21.385 [2024-07-25 07:28:53.758215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.385 [2024-07-25 07:28:53.758254] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.385 [2024-07-25 07:28:53.758264] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.385 [2024-07-25 07:28:53.758273] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.385 [2024-07-25 07:28:53.758280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.385 [2024-07-25 07:28:53.758326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.385 [2024-07-25 07:28:53.758419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.385 [2024-07-25 07:28:53.758486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.385 [2024-07-25 07:28:53.758484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:21.954 07:28:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.954 07:28:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:21.954 07:28:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.954 07:28:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:21.954 07:28:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:21.954 07:28:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.954 07:28:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:21.954 07:28:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:25.244 07:28:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:25.244 07:28:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:25.244 07:28:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:23:25.244 07:28:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
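Stripped of the xtrace noise, the target bring-up just traced, together with the provisioning that follows, reduces to a handful of commands. A sketch under the same workspace paths; the polling loop is a simplified stand-in for the harness's waitforlisten, using rpc_get_methods as a cheap liveness probe:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc_py=$spdk/scripts/rpc.py

    # Start the target with this run's core mask (-m 0xF) and tracepoint
    # mask (-e 0xFFFF), then wait for the RPC server on the default socket.
    $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until $rpc_py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    # Back the subsystem with a 64 MiB RAM bdev plus the local NVMe drive,
    # then expose both namespaces over RDMA on the first target IP.
    $rpc_py bdev_malloc_create 64 512                     # prints "Malloc0"
    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420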
00:23:25.502 07:28:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:25.502 07:28:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:23:25.502 07:28:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:25.502 07:28:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:23:25.502 07:28:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:23:25.761 [2024-07-25 07:28:58.070242] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:23:25.761 [2024-07-25 07:28:58.092012] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f4a3f0/0x1f77f00) succeed. 00:23:25.761 [2024-07-25 07:28:58.101680] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f4ba30/0x1fd7ec0) succeed. 00:23:25.761 07:28:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:26.019 07:28:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:26.019 07:28:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:26.278 07:28:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:26.278 07:28:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:26.278 07:28:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:26.537 [2024-07-25 07:28:58.921636] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:26.537 07:28:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:26.796 07:28:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:23:26.796 07:28:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:23:26.796 07:28:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:26.796 07:28:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:23:28.176 Initializing NVMe Controllers 00:23:28.176 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:23:28.176 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:23:28.176 Initialization complete. Launching workers. 
00:23:26.796 07:28:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']'
00:23:26.796 07:28:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
00:23:26.796 07:28:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:23:26.796 07:28:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
00:23:28.176 Initializing NVMe Controllers
00:23:28.176 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54]
00:23:28.176 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0
00:23:28.176 Initialization complete. Launching workers.
00:23:28.176 ========================================================
00:23:28.176                                                                             Latency(us)
00:23:28.176 Device Information                     :       IOPS      MiB/s    Average        min        max
00:23:28.176 PCIE (0000:d8:00.0) NSID 1 from core 0:  101530.04     396.60     314.75      10.21    4381.67
00:23:28.176 ========================================================
00:23:28.176 Total                                  :  101530.04     396.60     314.75      10.21    4381.67
00:23:28.176
00:23:28.176 07:29:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:23:28.176 EAL: No free 2048 kB hugepages reported on node 1
00:23:31.464 Initializing NVMe Controllers
00:23:31.464 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:23:31.464 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:31.464 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:31.464 Initialization complete. Launching workers.
00:23:31.464 ========================================================
00:23:31.464                                                                                                               Latency(us)
00:23:31.464 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:23:31.464 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    6692.26      26.14     149.22      46.53    6010.96
00:23:31.465 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    5204.54      20.33     191.18      71.88    6093.45
00:23:31.465 ========================================================
00:23:31.465 Total                                                                    :   11896.80      46.47     167.58      46.53    6093.45
00:23:31.465
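
Each run uses the same perf binary; only the -r transport ID string selects the device, so the local PCIe numbers and the fabric numbers come from an identical initiator I/O path. Flag meanings, as used here: -q queue depth, -o I/O size in bytes, -w workload pattern, -M read percentage of the mix, -t run time in seconds.

  PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
  # Local PCIe SSD, as in the first run above:
  $PERF -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
  # Same 4 KiB mixed workload over NVMe/RDMA to the target configured earlier:
  $PERF -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
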
00:23:31.465 07:29:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:23:31.465 EAL: No free 2048 kB hugepages reported on node 1
00:23:34.750 Initializing NVMe Controllers
00:23:34.750 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:23:34.750 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:34.750 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:34.750 Initialization complete. Launching workers.
00:23:34.750 ========================================================
00:23:34.750                                                                                                               Latency(us)
00:23:34.750 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:23:34.750 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   18542.28      72.43    1725.33     497.77    5523.15
00:23:34.750 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    4029.24      15.74    8002.91    6896.30   11035.35
00:23:34.750 ========================================================
00:23:34.750 Total                                                                    :   22571.52      88.17    2845.94     497.77   11035.35
00:23:34.750
00:23:34.750 07:29:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]]
00:23:34.750 07:29:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:23:34.750 EAL: No free 2048 kB hugepages reported on node 1
00:23:40.024 Initializing NVMe Controllers
00:23:40.024 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:23:40.024 Controller IO queue size 128, less than required.
00:23:40.024 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:40.024 Controller IO queue size 128, less than required.
00:23:40.024 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:40.024 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:40.024 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:40.024 Initialization complete. Launching workers.
00:23:40.024 ========================================================
00:23:40.024                                                                                                               Latency(us)
00:23:40.024 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:23:40.024 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    4022.00    1005.50   32031.21   15378.94   71450.15
00:23:40.025 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    4059.50    1014.87   31296.95   15418.79   48443.94
00:23:40.025 ========================================================
00:23:40.025 Total                                                                    :    8081.50    2020.37   31662.38   15378.94   71450.15
00:23:40.025
00:23:40.025 07:29:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4
00:23:40.025 EAL: No free 2048 kB hugepages reported on node 1
00:23:40.025 No valid NVMe controllers or AIO or URING devices found
00:23:40.025 Initializing NVMe Controllers
00:23:40.025 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:23:40.025 Controller IO queue size 128, less than required.
00:23:40.025 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:40.025 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:23:40.025 Controller IO queue size 128, less than required.
00:23:40.025 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:40.025 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:23:40.025 WARNING: Some requested NVMe devices were skipped
00:23:40.025 07:29:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat
00:23:40.025 EAL: No free 2048 kB hugepages reported on node 1
00:23:44.216 Initializing NVMe Controllers
00:23:44.216 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:23:44.216 Controller IO queue size 128, less than required.
00:23:44.216 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:44.216 Controller IO queue size 128, less than required.
00:23:44.216 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:44.216 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:44.216 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:44.216 Initialization complete. Launching workers.
00:23:44.216
00:23:44.216 ====================
00:23:44.216 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:23:44.216 RDMA transport:
00:23:44.216 	dev name:              mlx5_0
00:23:44.216 	polls:                 413980
00:23:44.216 	idle_polls:            410286
00:23:44.216 	completions:           45490
00:23:44.216 	queued_requests:       1
00:23:44.216 	total_send_wrs:        22745
00:23:44.216 	send_doorbell_updates: 3498
00:23:44.216 	total_recv_wrs:        22872
00:23:44.216 	recv_doorbell_updates: 3500
00:23:44.216 	---------------------------------
00:23:44.216
00:23:44.216 ====================
00:23:44.216 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:23:44.216 RDMA transport:
00:23:44.216 	dev name:              mlx5_0
00:23:44.216 	polls:                 420580
00:23:44.216 	idle_polls:            420313
00:23:44.216 	completions:           20606
00:23:44.216 	queued_requests:       1
00:23:44.216 	total_send_wrs:        10303
00:23:44.216 	send_doorbell_updates: 254
00:23:44.216 	total_recv_wrs:        10430
00:23:44.216 	recv_doorbell_updates: 255
00:23:44.216 	---------------------------------
00:23:44.216 ========================================================
00:23:44.216                                                                                                               Latency(us)
00:23:44.216 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:23:44.216 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    5686.00    1421.50   22569.43   11086.83   55530.67
00:23:44.216 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    2575.50     643.87   49790.15   25470.16   74710.70
00:23:44.216 ========================================================
00:23:44.216 Total                                                                    :    8261.50    2065.37   31055.42   11086.83   74710.70
00:23:44.216
00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 --
# sync 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:44.216 rmmod nvme_rdma 00:23:44.216 rmmod nvme_fabrics 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2771248 ']' 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2771248 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2771248 ']' 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2771248 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:44.216 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2771248 00:23:44.217 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:44.217 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:44.217 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2771248' 00:23:44.217 killing process with pid 2771248 00:23:44.217 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2771248 00:23:44.217 07:29:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2771248 00:23:46.814 07:29:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:46.814 07:29:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:46.814 00:23:46.814 real 0m33.932s 00:23:46.814 user 1m44.089s 00:23:46.814 sys 0m7.660s 00:23:46.814 07:29:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:46.815 07:29:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:46.815 ************************************ 00:23:46.815 END TEST nvmf_perf 00:23:46.815 ************************************ 00:23:46.815 07:29:19 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:23:46.815 07:29:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:46.815 07:29:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:46.815 07:29:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.815 ************************************ 00:23:46.815 START TEST nvmf_fio_host 00:23:46.815 ************************************ 00:23:46.815 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:23:46.815 * Looking for test storage... 00:23:46.815 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:46.815 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.075 
07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:47.075 07:29:19 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:47.075 07:29:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
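
The arrays being assembled above whitelist NVMe-oF-capable NICs by PCI vendor:device pair (Intel e810/x722 parts plus the Mellanox mlx5 family); the 0x15b3:0x1015 functions matched shortly below are ConnectX-4 Lx ports. To check a host by hand, something along these lines works; output shape will vary by distro, and the BDF is this rig's:

  # List Mellanox (vendor 0x15b3) functions with numeric IDs; 0x1015 = ConnectX-4 Lx:
  lspci -nn -d 15b3:
  # Or read one function's vendor/device pair straight from sysfs:
  cat /sys/bus/pci/devices/0000:d9:00.0/{vendor,device}
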
00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:55.198 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:55.199 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:55.199 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:55.199 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:55.199 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev 
rxe_net_dev rxe_net_devs 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:55.199 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:55.460 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:55.460 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:55.460 altname enp217s0f0np0 00:23:55.460 altname ens818f0np0 00:23:55.460 inet 192.168.100.8/24 scope global mlx_0_0 00:23:55.460 valid_lft forever preferred_lft forever 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:55.460 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:55.460 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:55.460 altname enp217s0f1np1 00:23:55.460 altname ens818f1np1 00:23:55.460 inet 192.168.100.9/24 scope global mlx_0_1 00:23:55.460 valid_lft forever preferred_lft forever 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:55.460 07:29:27 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}'
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}'
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8
00:23:55.460 192.168.100.9'
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8
00:23:55.460 192.168.100.9'
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8
00:23:55.460 192.168.100.9'
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']'
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']'
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']'
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma
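
Target addresses are not hard-coded: get_ip_address, traced at nvmf/common.sh@112-113 above, strips the prefix length off `ip -o -4 addr show` for each RDMA netdev, and the first two results become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. The traced pipeline reduces to:

  get_ip_address() {
      local interface=$1
      # e.g. '6: mlx_0_0 inet 192.168.100.8/24 ...' -> '192.168.100.8'
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this rig
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 on this rig
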
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2779627
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2779627
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2779627 ']'
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:55.460 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:55.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:55.461 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:55.461 07:29:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:23:55.461 [2024-07-25 07:29:27.950444] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:23:55.461 [2024-07-25 07:29:27.950503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:55.461 EAL: No free 2048 kB hugepages reported on node 1
00:23:55.720 [2024-07-25 07:29:28.037692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:55.720 [2024-07-25 07:29:28.113294] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:55.720 [2024-07-25 07:29:28.113334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:55.720 [2024-07-25 07:29:28.113344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:55.720 [2024-07-25 07:29:28.113353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:55.720 [2024-07-25 07:29:28.113360] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:55.720 [2024-07-25 07:29:28.113409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:23:55.720 [2024-07-25 07:29:28.113494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:23:55.720 [2024-07-25 07:29:28.113579] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:23:55.720 [2024-07-25 07:29:28.113581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:23:56.288 07:29:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:56.288 07:29:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0
00:23:56.288 07:29:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:23:56.547 [2024-07-25 07:29:28.941907] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x235ddd0/0x23622c0) succeed.
00:23:56.547 [2024-07-25 07:29:28.951299] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x235f410/0x23a3950) succeed.
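
waitforlisten (rpc_addr=/var/tmp/spdk.sock, max_retries=100 above) blocks until the freshly forked nvmf_tgt answers on its RPC socket before any configuration is attempted. A rough equivalent, assuming rpc.py and the default socket path; the real helper also verifies the pid stays alive between attempts:

  # Sketch only: poll the RPC socket with a cheap query until the target is up.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done
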
00:23:56.806 07:29:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:56.806 07:29:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:56.806 07:29:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.806 07:29:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:56.806 Malloc1 00:23:57.065 07:29:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.065 07:29:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:57.323 07:29:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:57.582 [2024-07-25 07:29:29.868178] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:57.582 07:29:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:57.582 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:57.870 07:29:30 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:23:57.870 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:23:57.870 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:23:57.870 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:23:57.870 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:23:57.870 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:23:57.870 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:23:57.870 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:23:57.870 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme'
00:23:57.870 07:29:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:23:58.137 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:23:58.137 fio-3.35
00:23:58.137 Starting 1 thread
00:23:58.137 EAL: No free 2048 kB hugepages reported on node 1
00:24:00.667
00:24:00.667 test: (groupid=0, jobs=1): err= 0: pid=2780148: Thu Jul 25 07:29:32 2024
00:24:00.667   read: IOPS=18.0k, BW=70.3MiB/s (73.8MB/s)(141MiB/2004msec)
00:24:00.667     slat (nsec): min=1335, max=38539, avg=1468.00, stdev=446.64
00:24:00.667     clat (usec): min=1925, max=6549, avg=3526.28, stdev=75.67
00:24:00.667      lat (usec): min=1947, max=6551, avg=3527.75, stdev=75.59
00:24:00.667     clat percentiles (usec):
00:24:00.667      | 1.00th=[ 3490], 5.00th=[ 3490], 10.00th=[ 3523], 20.00th=[ 3523],
00:24:00.667      | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3523],
00:24:00.667      | 70.00th=[ 3523], 80.00th=[ 3523], 90.00th=[ 3556], 95.00th=[ 3556],
00:24:00.667      | 99.00th=[ 3556], 99.50th=[ 3556], 99.90th=[ 3982], 99.95th=[ 5669],
00:24:00.667      | 99.99th=[ 6521]
00:24:00.667    bw (  KiB/s): min=70432, max=72888, per=100.00%, avg=72048.00, stdev=1104.14, samples=4
00:24:00.667    iops        : min=17608, max=18222, avg=18012.00, stdev=276.03, samples=4
00:24:00.667   write: IOPS=18.0k, BW=70.4MiB/s (73.8MB/s)(141MiB/2004msec); 0 zone resets
00:24:00.667     slat (nsec): min=1370, max=21058, avg=1560.96, stdev=463.36
00:24:00.667     clat (usec): min=2679, max=6574, avg=3525.73, stdev=83.31
00:24:00.667      lat (usec): min=2689, max=6575, avg=3527.30, stdev=83.25
00:24:00.667     clat percentiles (usec):
00:24:00.667      | 1.00th=[ 3490], 5.00th=[ 3490], 10.00th=[ 3523], 20.00th=[ 3523],
00:24:00.667      | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3523],
00:24:00.667      | 70.00th=[ 3523], 80.00th=[ 3523], 90.00th=[ 3556], 95.00th=[ 3556],
00:24:00.667      | 99.00th=[ 3556], 99.50th=[ 3589], 99.90th=[ 4817], 99.95th=[ 5669],
00:24:00.667      | 99.99th=[ 6587]
00:24:00.667    bw (  KiB/s): min=70408, max=72832, per=100.00%, avg=72114.00, stdev=1147.00, samples=4
00:24:00.667    iops        : min=17602, max=18208, avg=18028.50, stdev=286.75, samples=4
00:24:00.667   lat (msec)   : 2=0.01%, 4=99.88%, 10=0.12%
00:24:00.667   cpu          : usr=99.45%, sys=0.15%, ctx=15, majf=0, minf=4
00:24:00.667   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:24:00.667      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:00.667      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:24:00.667      issued rwts: total=36083,36125,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:00.667      latency   : target=0, window=0, percentile=100.00%, depth=128
00:24:00.667
00:24:00.667 Run status group 0 (all jobs):
00:24:00.667    READ: bw=70.3MiB/s (73.8MB/s), 70.3MiB/s-70.3MiB/s (73.8MB/s-73.8MB/s), io=141MiB (148MB), run=2004-2004msec
00:24:00.667   WRITE: bw=70.4MiB/s (73.8MB/s), 70.4MiB/s-70.4MiB/s (73.8MB/s-73.8MB/s), io=141MiB (148MB), run=2004-2004msec
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:00.667 07:29:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:24:00.667 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:00.667 fio-3.35 00:24:00.667 Starting 1 thread 00:24:00.928 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.462 00:24:03.462 test: (groupid=0, jobs=1): err= 0: pid=2780802: Thu Jul 25 07:29:35 2024 00:24:03.462 read: IOPS=14.7k, BW=229MiB/s (240MB/s)(451MiB/1967msec) 00:24:03.462 slat (usec): min=2, max=103, avg= 2.59, stdev= 1.11 00:24:03.462 clat (usec): min=479, max=8159, avg=1552.69, stdev=1220.13 00:24:03.462 lat (usec): min=481, max=8178, avg=1555.28, stdev=1220.50 00:24:03.462 clat percentiles (usec): 00:24:03.462 | 1.00th=[ 685], 5.00th=[ 775], 10.00th=[ 832], 20.00th=[ 906], 00:24:03.462 | 30.00th=[ 979], 40.00th=[ 1057], 50.00th=[ 1156], 60.00th=[ 1270], 00:24:03.462 | 70.00th=[ 1401], 80.00th=[ 1565], 90.00th=[ 3261], 95.00th=[ 4817], 00:24:03.462 | 99.00th=[ 6390], 99.50th=[ 6915], 99.90th=[ 7701], 99.95th=[ 7767], 00:24:03.462 | 99.99th=[ 7767] 00:24:03.462 bw ( KiB/s): min=110304, max=115328, per=48.60%, avg=114000.00, stdev=2466.63, samples=4 00:24:03.462 iops : min= 6894, max= 7208, avg=7125.00, stdev=154.16, samples=4 00:24:03.462 write: IOPS=8239, BW=129MiB/s (135MB/s)(231MiB/1797msec); 0 zone resets 00:24:03.462 slat (usec): min=26, max=123, avg=28.96, stdev= 5.14 00:24:03.462 clat (usec): min=3765, max=20820, avg=12586.38, stdev=1809.34 00:24:03.462 lat (usec): min=3794, max=20848, avg=12615.35, stdev=1808.89 00:24:03.462 clat percentiles (usec): 00:24:03.462 | 1.00th=[ 7111], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11207], 00:24:03.462 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:24:03.462 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14615], 95.00th=[15533], 00:24:03.462 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19792], 99.95th=[20317], 00:24:03.462 | 99.99th=[20841] 00:24:03.462 bw ( KiB/s): min=116288, max=120448, per=89.85%, avg=118448.00, stdev=2052.91, samples=4 00:24:03.462 iops : min= 7268, max= 7528, avg=7403.00, stdev=128.31, samples=4 00:24:03.462 lat (usec) : 500=0.01%, 750=2.39%, 1000=19.28% 00:24:03.462 lat (msec) : 2=36.72%, 4=2.18%, 10=7.35%, 20=32.06%, 50=0.02% 00:24:03.462 cpu : usr=96.36%, sys=1.85%, ctx=204, majf=0, minf=3 00:24:03.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:03.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.462 issued rwts: total=28839,14806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.462 00:24:03.462 Run status group 0 (all jobs): 00:24:03.462 READ: bw=229MiB/s (240MB/s), 229MiB/s-229MiB/s (240MB/s-240MB/s), io=451MiB (472MB), run=1967-1967msec 00:24:03.462 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=231MiB (243MB), run=1797-1797msec 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:03.462 rmmod nvme_rdma 00:24:03.462 rmmod nvme_fabrics 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2779627 ']' 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2779627 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2779627 ']' 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2779627 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2779627 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2779627' 00:24:03.462 killing process with pid 2779627 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2779627 00:24:03.462 07:29:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2779627 00:24:03.722 07:29:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:03.722 07:29:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:03.722 00:24:03.722 real 0m16.890s 00:24:03.722 user 0m56.394s 00:24:03.722 sys 0m7.761s 00:24:03.722 07:29:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:03.722 07:29:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.722 ************************************ 00:24:03.722 END TEST nvmf_fio_host 00:24:03.722 ************************************ 
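For reference, both fio runs above follow a single invocation pattern: stock fio with the SPDK NVMe plugin LD_PRELOADed, and the NVMe-oF RDMA target addressed through the --filename string instead of a block device. A minimal standalone sketch of that pattern, using the plugin path and target address from this log; the inline job options only approximate example_config.fio, whose contents are not shown here:

  # Sketch: fio driven through the SPDK NVMe plugin over RDMA.
  plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
  LD_PRELOAD=$plugin /usr/src/fio/fio \
      --name=test --ioengine=spdk --thread=1 \
      --rw=randrw --bs=4096 --iodepth=128 --runtime=2 --time_based=1 \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'

The ldd/grep sequence before each run is the harness probing the plugin for libasan/libclang_rt.asan so that a sanitizer runtime, if linked in, can be preloaded ahead of the plugin; in this build both probes come back empty, so only the plugin itself ends up in LD_PRELOAD.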
00:24:03.722 07:29:36 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:24:03.722 07:29:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:03.722 07:29:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:03.722 07:29:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.722 ************************************ 00:24:03.722 START TEST nvmf_failover 00:24:03.722 ************************************ 00:24:03.722 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:24:03.982 * Looking for test storage... 00:24:03.982 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.982 07:29:36 
nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[…the same three toolchain dirs repeated several more times…]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=[as above, now led by /opt/go/1.21.1/bin] 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=[as above, now led by /opt/protoc/21.7/bin] 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo [the exported PATH, as in @4] 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:03.982 07:29:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:12.102 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.102 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:12.102 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:12.102 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:12.102 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:12.102 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:12.102 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:12.102 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:12.102 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:12.103 07:29:44 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:12.103 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:12.103 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound 
]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:12.103 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:12.103 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@64 
-- # modprobe ib_umad 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:24:12.103 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:12.103 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:12.103 altname enp217s0f0np0 00:24:12.103 altname ens818f0np0 00:24:12.103 inet 192.168.100.8/24 scope global mlx_0_0 00:24:12.103 valid_lft forever preferred_lft forever 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:12.103 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:12.104 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:12.104 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:12.104 altname enp217s0f1np1 00:24:12.104 altname ens818f1np1 00:24:12.104 inet 192.168.100.9/24 scope global mlx_0_1 00:24:12.104 valid_lft forever preferred_lft forever 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:12.104 192.168.100.9' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:12.104 192.168.100.9' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:12.104 192.168.100.9' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:12.104 07:29:44 
nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2785234 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2785234 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2785234 ']' 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.104 07:29:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:12.104 [2024-07-25 07:29:44.579584] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:24:12.104 [2024-07-25 07:29:44.579648] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.104 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.418 [2024-07-25 07:29:44.663759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:12.418 [2024-07-25 07:29:44.732250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.418 [2024-07-25 07:29:44.732293] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.418 [2024-07-25 07:29:44.732303] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.418 [2024-07-25 07:29:44.732311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.418 [2024-07-25 07:29:44.732318] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
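nvmfappstart above reduces to launching nvmf_tgt with a shared-memory id (-i), tracepoint mask (-e), and core mask (-m), then blocking until the RPC socket answers. A rough standalone equivalent, assuming the same workspace layout; the readiness loop is a simplification of the harness's waitforlisten, with rpc_get_methods used only as a cheap liveness probe:

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Wait until the target answers on its default RPC socket.
  until $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done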
00:24:12.418 [2024-07-25 07:29:44.732426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.418 [2024-07-25 07:29:44.732510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.418 [2024-07-25 07:29:44.732512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.986 07:29:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:12.986 07:29:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:12.986 07:29:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:12.986 07:29:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:12.986 07:29:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:12.986 07:29:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.986 07:29:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:13.245 [2024-07-25 07:29:45.614795] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2052500/0x20569f0) succeed. 00:24:13.245 [2024-07-25 07:29:45.623937] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2053aa0/0x2098080) succeed. 00:24:13.245 07:29:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:13.503 Malloc0 00:24:13.503 07:29:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:13.762 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:13.762 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:14.022 [2024-07-25 07:29:46.439742] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:14.022 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:14.280 [2024-07-25 07:29:46.632151] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:14.280 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:14.539 [2024-07-25 07:29:46.820814] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:24:14.539 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2785749 00:24:14.540 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:24:14.540 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.540 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2785749 /var/tmp/bdevperf.sock 00:24:14.540 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2785749 ']' 00:24:14.540 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.540 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.540 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.540 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.540 07:29:46 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:15.477 07:29:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.477 07:29:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:15.477 07:29:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:15.477 NVMe0n1 00:24:15.477 07:29:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:15.736 00:24:15.736 07:29:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2785917 00:24:15.736 07:29:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.736 07:29:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:17.114 07:29:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:17.114 07:29:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:20.406 07:29:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:20.406 00:24:20.406 07:29:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:20.406 07:29:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:23.693 07:29:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:23.693 [2024-07-25 07:29:56.021057] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:23.693 07:29:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:24.630 07:29:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:24.889 07:29:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2785917 00:24:31.467 0 00:24:31.467 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2785749 00:24:31.467 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2785749 ']' 00:24:31.467 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2785749 00:24:31.467 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:31.467 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:31.467 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2785749 00:24:31.467 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:31.467 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:31.467 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2785749' 00:24:31.467 killing process with pid 2785749 00:24:31.467 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2785749 00:24:31.467 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2785749 00:24:31.467 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:31.467 [2024-07-25 07:29:46.894329] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:24:31.467 [2024-07-25 07:29:46.894390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785749 ] 00:24:31.467 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.467 [2024-07-25 07:29:46.978534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.467 [2024-07-25 07:29:47.049195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.467 Running I/O for 15 seconds... 
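The listener shuffle that produces the abort dump below is easier to see condensed out of the trace. Roughly, failover.sh has run and will run the following RPC sequence (paths as in this workspace; the bdevperf-side bdev_nvme_attach_controller calls on /var/tmp/bdevperf.sock are omitted, and bdevperf keeps queue-depth-128 verify I/O running throughout):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # Target setup: one malloc namespace, three RDMA listeners.
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns $nqn Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s $port
  done
  # Failover cycle while I/O runs: drop a path, give the initiator time to
  # move over, and rotate until every listener has failed once.
  $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4420
  sleep 3
  $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4421
  sleep 3
  $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4422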
00:24:31.467 [2024-07-25 07:29:50.390698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.467 [… each queued WRITE on qid:1, lba 27168 through lba 27560, is echoed here with a matching completion of the form "nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0"; the repeated pairs are elided and the capture cuts off mid-entry …]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.469 [2024-07-25 07:29:50.392214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.469 [2024-07-25 07:29:50.392243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.469 [2024-07-25 07:29:50.392271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:27592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.469 [2024-07-25 07:29:50.392301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.469 [2024-07-25 07:29:50.392333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:27608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.469 [2024-07-25 07:29:50.392361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.469 [2024-07-25 07:29:50.392390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.469 [2024-07-25 07:29:50.392418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.469 [2024-07-25 07:29:50.392446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.469 [2024-07-25 
07:29:50.392475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:26632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392749] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:26720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:26728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.392981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.392997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.393010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 
sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.393025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.393038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.393053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.393066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.393080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.393094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.393110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.393123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.469 [2024-07-25 07:29:50.393138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x181d00 00:24:31.469 [2024-07-25 07:29:50.393152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393286] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:78 nsid:1 lba:26912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26984 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000755a000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.393982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.393998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.394012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.394027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.394041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.394056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.394070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.394086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 
07:29:50.394099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.394115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.394128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.394143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.394157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.394174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x181d00 00:24:31.470 [2024-07-25 07:29:50.394188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.470 [2024-07-25 07:29:50.394203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x181d00 00:24:31.471 [2024-07-25 07:29:50.394216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.471 [2024-07-25 07:29:50.394232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x181d00 00:24:31.471 [2024-07-25 07:29:50.394245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.471 [2024-07-25 07:29:50.394261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x181d00 00:24:31.471 [2024-07-25 07:29:50.394275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.471 [2024-07-25 07:29:50.394290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x181d00 00:24:31.471 [2024-07-25 07:29:50.394304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.471 [2024-07-25 07:29:50.394320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x181d00 00:24:31.471 [2024-07-25 07:29:50.394333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.471 [2024-07-25 07:29:50.394348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x181d00 00:24:31.471 [2024-07-25 07:29:50.394362] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.471 [2024-07-25 07:29:50.394377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x181d00 00:24:31.471 [2024-07-25 07:29:50.394391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.471 [2024-07-25 07:29:50.394407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x181d00 00:24:31.471 [2024-07-25 07:29:50.394420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.471 [2024-07-25 07:29:50.394435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x181d00 00:24:31.471 [2024-07-25 07:29:50.394449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.471 [2024-07-25 07:29:50.396435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:31.471 [2024-07-25 07:29:50.396460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:31.471 [2024-07-25 07:29:50.396473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27160 len:8 PRP1 0x0 PRP2 0x0 00:24:31.471 [2024-07-25 07:29:50.396491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.471 [2024-07-25 07:29:50.396547] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:24:31.471 [2024-07-25 07:29:50.396565] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:24:31.471 [2024-07-25 07:29:50.396580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.471 [2024-07-25 07:29:50.399730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.471 [2024-07-25 07:29:50.414189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:31.471 [2024-07-25 07:29:50.456607] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
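The sequence above is bdev_nvme's path failover in action: when the RDMA qpair on 192.168.100.8:4420 dies, every queued command is completed with ABORTED - SQ DELETION, the qpair is freed, and the controller is reconnected on the alternate listener at 192.168.100.8:4421. A minimal sketch of wiring up such a two-path attach by hand with SPDK's rpc.py follows; the controller name Nvme0 and the -x failover multipath mode are assumptions, not taken from this log, and the exact flags can differ between SPDK releases:

# Primary path on port 4420 (sketch; name and -x mode are assumptions)
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover
# Re-attaching with the same controller name and subsystem NQN but the
# second listener registers an alternate trid; bdev_nvme fails over to
# it when the first path drops, as in the log above.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover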
00:24:31.471 [2024-07-25 07:29:53.832166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:31.471 [2024-07-25 07:29:53.832203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: interleaved WRITE sqid:1 lba:121840-122096 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 lba:121272-121592 (len:8, SGL KEYED DATA BLOCK, len:0x1000, key:0x181d00), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 ...]
00:24:31.473 [2024-07-25 07:29:53.834363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x181d00
00:24:31.473 [2024-07-25 07:29:53.834377] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.473 [2024-07-25 07:29:53.834392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x181d00 00:24:31.473 [2024-07-25 07:29:53.834406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.473 [2024-07-25 07:29:53.834422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x181d00 00:24:31.473 [2024-07-25 07:29:53.834435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.473 [2024-07-25 07:29:53.834450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x181d00 00:24:31.473 [2024-07-25 07:29:53.834464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.473 [2024-07-25 07:29:53.834478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x181d00 00:24:31.473 [2024-07-25 07:29:53.834494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.473 [2024-07-25 07:29:53.834510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.473 [2024-07-25 07:29:53.834523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.473 [2024-07-25 07:29:53.834540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.473 [2024-07-25 07:29:53.834553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.473 [2024-07-25 07:29:53.834568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.834581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.834609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.834640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834655] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.834668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.834695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.834723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.834752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.834782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.834810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.834840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.834869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.834898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 
07:29:53.834927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.834956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.834972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.834985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:122176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.835014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.835043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.835072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.835100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:122208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.835128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.835156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.474 [2024-07-25 07:29:53.835190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121704 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 
key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x181d00 00:24:31.474 [2024-07-25 07:29:53.835594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.474 [2024-07-25 07:29:53.835609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x181d00 00:24:31.475 [2024-07-25 07:29:53.835624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.475 [2024-07-25 07:29:53.835644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x181d00 00:24:31.475 [2024-07-25 07:29:53.835657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.475 [2024-07-25 07:29:53.835672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.475 [2024-07-25 07:29:53.835685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.475 [2024-07-25 07:29:53.835701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.475 [2024-07-25 07:29:53.835715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.475 [2024-07-25 07:29:53.835730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:122248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.475 [2024-07-25 07:29:53.835743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.475 [2024-07-25 07:29:53.835758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:122256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.475 [2024-07-25 07:29:53.835770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.475 [2024-07-25 07:29:53.835786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:122264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.475 [2024-07-25 07:29:53.835800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.475 [2024-07-25 07:29:53.835815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.475 [2024-07-25 07:29:53.835828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.475 [2024-07-25 07:29:53.835842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:122280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.475 [2024-07-25 07:29:53.835858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.475 [2024-07-25 07:29:53.837840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:31.475 [2024-07-25 07:29:53.837862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:31.475 [2024-07-25 07:29:53.837874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122288 len:8 PRP1 0x0 PRP2 0x0 00:24:31.475 [2024-07-25 07:29:53.837889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.475 [2024-07-25 07:29:53.837940] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:24:31.475 [2024-07-25 07:29:53.837955] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:24:31.475 [2024-07-25 07:29:53.837969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.475 [2024-07-25 07:29:53.841071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.475 [2024-07-25 07:29:53.855299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:31.475 [2024-07-25 07:29:53.905108] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:31.475 [2024-07-25 07:29:58.221383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x181d00
00:24:31.475 [2024-07-25 07:29:58.221420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0
[... identical command/completion NOTICE pairs repeat for each remaining queued READ (lba 101472-101840) and WRITE (lba 102000-102440) on qid:1 ...]
00:24:31.478 [2024-07-25 07:29:58.224433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181d00
00:24:31.478 [2024-07-25 07:29:58.224447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 
07:29:58.224727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x181d00 00:24:31.478 [2024-07-25 07:29:58.224966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.224981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.478 [2024-07-25 07:29:58.224995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.225009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.478 [2024-07-25 07:29:58.225022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.478 [2024-07-25 07:29:58.225042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.479 [2024-07-25 07:29:58.225055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.479 [2024-07-25 07:29:58.225070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.479 [2024-07-25 07:29:58.225083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5bd6d000 sqhd:52b0 p:0 m:0 dnr:0 00:24:31.479 [2024-07-25 07:29:58.227217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:31.479 [2024-07-25 07:29:58.227241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:31.479 [2024-07-25 07:29:58.227255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102480 len:8 PRP1 0x0 PRP2 0x0 00:24:31.479 [2024-07-25 07:29:58.227270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.479 [2024-07-25 07:29:58.227320] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:24:31.479 [2024-07-25 07:29:58.227337] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:24:31.479 [2024-07-25 07:29:58.227353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.479 [2024-07-25 07:29:58.230574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.479 [2024-07-25 07:29:58.244823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:31.479 [2024-07-25 07:29:58.287502] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
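For orientation: the wall of ABORTED - SQ DELETION notices above is the expected signature of a path switch in the bdev_nvme failover path, not a malfunction. When the active RDMA qpair is torn down, every command still queued on that submission queue completes with SQ DELETION (00/08); bdev_nvme then frees the disconnected qpair, fails over to the next registered trid (here 4422 -> 4420), and resets the controller, after which the aborted I/O is reissued on the new path. The test asserts this cycle by counting the reset notices in the bdevperf output; a minimal sketch of that check in the suite's own shell idiom (the try.txt path is the log file this run uses, cf. failover.sh@94 further below):

  # count completed failovers in the bdevperf log (cf. failover.sh@65 below)
  log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
  count=$(grep -c 'Resetting controller successful' "$log")
  (( count == 3 )) || echo "expected 3 successful controller resets, got $count"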
00:24:31.479 
00:24:31.479 Latency(us)
00:24:31.479 Device Information                                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:31.479 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:31.479 Verification LBA range: start 0x0 length 0x4000
00:24:31.479 NVMe0n1                                                :      15.00   14545.34      56.82     313.32       0.00    8590.13     335.87 1020054.73
00:24:31.479 ===================================================================================================================
00:24:31.479 Total                                                  :   14545.34      56.82     313.32       0.00    8590.13     335.87 1020054.73
00:24:31.479 Received shutdown signal, test time was about 15.000000 seconds
00:24:31.479 
00:24:31.479 Latency(us)
00:24:31.479 Device Information                                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:31.479 ===================================================================================================================
00:24:31.479 Total                                                  :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:24:31.479 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:31.479 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:31.479 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:31.479 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2788582
00:24:31.479 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:31.479 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2788582 /var/tmp/bdevperf.sock
00:24:31.479 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2788582 ']'
00:24:31.479 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:31.479 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:31.479 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:31.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:31.479 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:31.479 07:30:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:32.047 07:30:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.047 07:30:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:32.047 07:30:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:32.306 [2024-07-25 07:30:04.638496] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:32.306 07:30:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:32.306 [2024-07-25 07:30:04.819079] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:24:32.565 07:30:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:32.823 NVMe0n1 00:24:32.823 07:30:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:32.823 00:24:33.082 07:30:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:33.082 00:24:33.082 07:30:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:33.082 07:30:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:33.341 07:30:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:33.599 07:30:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:36.886 07:30:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.886 07:30:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:36.886 07:30:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2789989 00:24:36.886 07:30:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:36.886 07:30:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2789989 00:24:37.886 0 00:24:37.886 07:30:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:37.886 [2024-07-25 07:30:03.671956] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:24:37.886 [2024-07-25 07:30:03.672014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788582 ]
00:24:37.886 EAL: No free 2048 kB hugepages reported on node 1
00:24:37.886 [2024-07-25 07:30:03.758515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:37.886 [2024-07-25 07:30:03.824275] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:24:37.886 [2024-07-25 07:30:05.930918] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:24:37.886 [2024-07-25 07:30:05.931537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.886 [2024-07-25 07:30:05.931582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.886 [2024-07-25 07:30:05.956408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:24:37.886 [2024-07-25 07:30:05.972497] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:37.886 Running I/O for 1 seconds...
00:24:37.886 
00:24:37.886 Latency(us)
00:24:37.886 Device Information                                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:37.886 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:37.886 Verification LBA range: start 0x0 length 0x4000
00:24:37.886 NVMe0n1                                                :       1.00   18255.61      71.31       0.00       0.00    6967.89    1081.34   13264.49
00:24:37.886 ===================================================================================================================
00:24:37.886 Total                                                  :   18255.61      71.31       0.00       0.00    6967.89    1081.34   13264.49
00:24:37.886 07:30:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:38.144 07:30:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:38.144 07:30:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:38.144 07:30:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:38.144 07:30:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:38.402 07:30:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:38.660 07:30:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:41.949 07:30:14 
nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2788582 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2788582 ']' 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2788582 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2788582 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2788582' 00:24:41.949 killing process with pid 2788582 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2788582 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2788582 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:41.949 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:42.208 rmmod nvme_rdma 00:24:42.208 rmmod nvme_fabrics 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2785234 ']' 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2785234 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2785234 ']' 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2785234 00:24:42.208 07:30:14 
nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:42.208 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2785234 00:24:42.467 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:42.467 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:42.467 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2785234' 00:24:42.467 killing process with pid 2785234 00:24:42.467 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2785234 00:24:42.467 07:30:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2785234 00:24:42.727 07:30:15 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:42.727 07:30:15 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:42.727 00:24:42.727 real 0m38.810s 00:24:42.727 user 2m4.258s 00:24:42.727 sys 0m8.757s 00:24:42.727 07:30:15 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:42.727 07:30:15 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:42.727 ************************************ 00:24:42.727 END TEST nvmf_failover 00:24:42.727 ************************************ 00:24:42.727 07:30:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:24:42.727 07:30:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:42.727 07:30:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:42.727 07:30:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.727 ************************************ 00:24:42.727 START TEST nvmf_host_discovery 00:24:42.727 ************************************ 00:24:42.727 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:24:42.727 * Looking for test storage... 
00:24:42.727 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:24:42.727 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2-@6 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain dirs repeated six times, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin (two further PATH permutations, export PATH, and the echoed value elided)
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']'
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
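That skip happens before any target or host setup: discovery.sh checks the transport and bails out immediately, which is why the test finishes in a fraction of a second below. Reconstructed from the trace at discovery.sh@11-@13 (the TEST_TRANSPORT variable name is an assumption; the trace only shows both sides already expanded to rdma), the guard presumably reads:

  # host/discovery.sh, lines 11-13 (sketch; variable name assumed)
  if [ "$TEST_TRANSPORT" == rdma ]; then
      echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
      exit 0
  fi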
00:24:42.728 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:24:42.728 00:24:42.728 real 0m0.109s 00:24:42.728 user 0m0.038s 00:24:42.728 sys 0m0.080s 00:24:42.728 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:42.728 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.728 ************************************ 00:24:42.728 END TEST nvmf_host_discovery 00:24:42.728 ************************************ 00:24:42.728 07:30:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:24:42.728 07:30:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:42.728 07:30:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:42.728 07:30:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.988 ************************************ 00:24:42.988 START TEST nvmf_host_multipath_status 00:24:42.988 ************************************ 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:24:42.988 * Looking for test storage... 00:24:42.988 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.988 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2-@6 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain dirs repeated six times, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin (two further PATH permutations, export PATH, and the echoed value elided)
00:24:42.989 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']'
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']'
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
07:30:15 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.989 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:42.989 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:42.989 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:42.989 07:30:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.116 07:30:23 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:51.116 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:51.116 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.116 
07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:51.116 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:51.116 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:51.116 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip 
addr show mlx_0_0 00:24:51.117 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:51.117 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:51.117 altname enp217s0f0np0 00:24:51.117 altname ens818f0np0 00:24:51.117 inet 192.168.100.8/24 scope global mlx_0_0 00:24:51.117 valid_lft forever preferred_lft forever 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:51.117 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:51.117 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:51.117 altname enp217s0f1np1 00:24:51.117 altname ens818f1np1 00:24:51.117 inet 192.168.100.9/24 scope global mlx_0_1 00:24:51.117 valid_lft forever preferred_lft forever 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:51.117 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 
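
The trace above harvests each RDMA interface's IPv4 address by piping "ip -o -4 addr show" through awk and cut. A minimal sketch of that lookup, mirroring the get_ip_address pipeline logged above (the interface name and address are simply the ones this rig reports):

    # Extract the first IPv4 address assigned to an interface,
    # exactly as the get_ip_address calls in the trace do.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 on this system
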
00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:51.377 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:51.378 192.168.100.9' 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:51.378 192.168.100.9' 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:51.378 192.168.100.9' 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:51.378 
07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2795094 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2795094 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2795094 ']' 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:51.378 07:30:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:51.378 [2024-07-25 07:30:23.775867] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:24:51.378 [2024-07-25 07:30:23.775922] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.378 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.378 [2024-07-25 07:30:23.858275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:51.638 [2024-07-25 07:30:23.926199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.638 [2024-07-25 07:30:23.926242] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.638 [2024-07-25 07:30:23.926256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.638 [2024-07-25 07:30:23.926267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.638 [2024-07-25 07:30:23.926276] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
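
At this point nvmf_tgt has been launched with the core mask from the trace and the harness blocks until the RPC socket answers. Roughly the start-and-wait pattern in play, sketched from the commands logged above (the polling loop is an illustrative stand-in for waitforlisten, not its exact implementation; rpc_get_methods is a standard SPDK RPC used here only as a readiness probe):

    # Start the target, then poll its UNIX-domain RPC socket until it
    # is up before issuing any configuration RPCs.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
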
00:24:51.638 [2024-07-25 07:30:23.926343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.638 [2024-07-25 07:30:23.926346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.206 07:30:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:52.206 07:30:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:52.206 07:30:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:52.206 07:30:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:52.206 07:30:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:52.206 07:30:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.206 07:30:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2795094 00:24:52.206 07:30:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:52.464 [2024-07-25 07:30:24.799600] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1496840/0x149ad30) succeed. 00:24:52.464 [2024-07-25 07:30:24.808803] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1497d40/0x14dc3c0) succeed. 00:24:52.464 07:30:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:52.723 Malloc0 00:24:52.723 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:52.723 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:52.981 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:53.239 [2024-07-25 07:30:25.563066] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:53.239 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:53.239 [2024-07-25 07:30:25.731278] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:53.239 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2795389 00:24:53.239 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:53.239 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:53.239 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2795389 /var/tmp/bdevperf.sock 00:24:53.239 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2795389 ']' 00:24:53.239 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.239 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:53.239 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:53.239 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:53.239 07:30:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:54.175 07:30:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:54.176 07:30:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:54.176 07:30:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:54.435 07:30:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:54.693 Nvme0n1 00:24:54.693 07:30:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:54.951 Nvme0n1 00:24:54.951 07:30:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:54.951 07:30:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:56.855 07:30:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:56.855 07:30:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:24:57.114 07:30:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:24:57.114 07:30:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:58.494 07:30:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:58.494 07:30:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:58.494 07:30:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.494 07:30:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:58.494 07:30:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.494 07:30:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:58.494 07:30:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.494 07:30:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:58.494 07:30:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:58.494 07:30:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:58.494 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.494 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:58.754 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.754 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:58.754 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.754 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:59.013 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.013 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:59.013 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.013 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:59.271 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.271 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
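
Each check_status pass above issues the same probe six times: dump the host's I/O paths over the bdevperf RPC socket and select one attribute (current, connected, or accessible) of the path on one listener port with jq. Condensed into a single function matching the calls logged above:

    # Query bdevperf's view of the multipath I/O paths and compare one
    # attribute of the path on a given trsvcid against the expectation.
    port_status() {
        local port=$1 attr=$2 expected=$3
        local got
        got=$(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $got == "$expected" ]]
    }

    port_status 4420 current true   # 4420 is the active path at this point in the run

The ANA states driven by set_ANA_state between checks (optimized, non_optimized, inaccessible) are what flip these attributes from one pass to the next.
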
00:24:59.271 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.271 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:59.271 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.271 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:59.271 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:59.529 07:30:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:24:59.787 07:30:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:00.787 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:00.787 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:00.787 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.787 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:00.787 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:00.787 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:00.787 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.787 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:01.045 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.045 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:01.045 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:01.045 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.304 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.304 07:30:33 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:01.304 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.304 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:01.304 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.304 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:01.304 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.304 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:01.563 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.563 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:01.563 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.563 07:30:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:01.822 07:30:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.822 07:30:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:01.822 07:30:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:01.822 07:30:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:25:02.081 07:30:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:03.017 07:30:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:03.017 07:30:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:03.017 07:30:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.017 07:30:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:03.276 07:30:35 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.276 07:30:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:03.276 07:30:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.276 07:30:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:03.535 07:30:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:03.535 07:30:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:03.535 07:30:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.535 07:30:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:03.535 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.535 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:03.535 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.535 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:03.793 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.793 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:03.793 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.793 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:04.051 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.051 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:04.051 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.051 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:04.051 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.051 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:25:04.051 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:04.309 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:04.568 07:30:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:05.506 07:30:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:05.506 07:30:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:05.506 07:30:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.506 07:30:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.765 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.765 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:05.765 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.765 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:05.765 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:05.765 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:05.765 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.765 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:06.024 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.024 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:06.024 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.024 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:06.283 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.283 07:30:38 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:06.283 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.283 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:06.541 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.541 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:06.541 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.541 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:06.541 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.541 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:06.541 07:30:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:25:06.800 07:30:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:07.059 07:30:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:07.996 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:07.996 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:07.996 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.996 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:08.255 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.255 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:08.255 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.255 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:08.255 07:30:40 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.255 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:08.255 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.255 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:08.514 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.514 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:08.514 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.514 07:30:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:08.774 07:30:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.774 07:30:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:08.774 07:30:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.774 07:30:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:08.774 07:30:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.774 07:30:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:08.774 07:30:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:08.774 07:30:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.033 07:30:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:09.033 07:30:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:09.033 07:30:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:25:09.293 07:30:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:09.293 07:30:41 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:10.670 07:30:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:10.670 07:30:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:10.670 07:30:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.670 07:30:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:10.670 07:30:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.670 07:30:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:10.670 07:30:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.670 07:30:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:10.670 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.670 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:10.670 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.670 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:10.929 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.929 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:10.929 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.929 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:11.188 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.189 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:11.189 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.189 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:11.189 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:25:11.189 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:11.189 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.189 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:11.448 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.448 07:30:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:11.707 07:30:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:11.707 07:30:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:25:11.707 07:30:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:11.966 07:30:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:12.903 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:12.903 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:12.903 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.903 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.162 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.162 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:13.162 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.162 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:13.421 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.421 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:13.421 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.421 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:13.421 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.421 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:13.421 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.681 07:30:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:13.681 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.681 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:13.681 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.681 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:13.941 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.941 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:13.941 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.941 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:14.200 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.200 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:14.200 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:14.200 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:14.459 07:30:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:15.397 07:30:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:15.397 07:30:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:15.397 07:30:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.397 07:30:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:15.656 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:15.656 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:15.656 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.656 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:15.914 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.914 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:15.914 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.914 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:15.914 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.914 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:15.914 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.914 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:16.172 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.172 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:16.172 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:16.172 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.441 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.441 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:16.441 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.441 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
00:25:16.441 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:25:16.441 07:30:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:25:16.751 07:30:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized
00:25:17.009 07:30:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:25:17.948 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:25:17.948 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:17.948 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:17.948 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:17.948 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:17.948 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:17.948 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:18.207 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:18.207 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:18.207 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:18.207 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:18.208 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:18.467 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:18.467 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:18.467 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:18.467 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:18.467 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:18.467 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:18.467 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:18.467 07:30:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:18.727 07:30:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:18.727 07:30:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:18.727 07:30:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:18.727 07:30:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:18.986 07:30:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
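check_status's six booleans decode as the expected current/connected/accessible values for ports 4420 and 4421, in that order; with both listeners non_optimized, as above, every flag is expected true. A sketch of the wrapper as it reads directly off the @68-@73 lines:

  # Sketch matching the @68-@73 trace lines: expectations for
  # (4420 current, 4421 current, 4420 connected, 4421 connected,
  #  4420 accessible, 4421 accessible).
  check_status() {
      port_status 4420 current "$1"
      port_status 4421 current "$2"
      port_status 4420 connected "$3"
      port_status 4421 connected "$4"
      port_status 4420 accessible "$5"
      port_status 4421 accessible "$6"
  }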
00:25:18.986 07:30:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:25:18.986 07:30:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:25:19.246 07:30:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:25:19.246 07:30:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:25:20.624 07:30:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:25:20.624 07:30:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:20.624 07:30:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:20.624 07:30:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:20.624 07:30:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:20.625 07:30:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:20.625 07:30:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:20.625 07:30:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:20.625 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:20.625 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:20.625 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:20.625 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:20.884 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:20.884 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:20.884 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:20.884 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:21.142 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:21.142 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:21.142 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:21.142 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:21.142 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:21.142 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:21.142 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:21.142 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:21.400 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
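Once 4421 is made inaccessible its path stays connected but drops both current and accessible, which is exactly what the check_status true false true true true false call above asserted. To eyeball the raw state behind these assertions, the same RPC the trace uses can be pretty-printed; a sketch, not part of the test itself:

  # Dump every path's flags in one view, same RPC and socket as the trace.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
      jq '.poll_groups[].io_paths[] | {trsvcid: .transport.trsvcid, current, connected, accessible}'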
00:25:21.400 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2795389
00:25:21.400 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2795389 ']'
00:25:21.400 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2795389
00:25:21.400 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:25:21.400 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:21.400 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2795389
00:25:21.400 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:25:21.400 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:25:21.400 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2795389'
killing process with pid 2795389
00:25:21.400 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2795389
00:25:21.400 07:30:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2795389
00:25:21.663 Connection closed with partial response:
00:25:21.663
00:25:21.663
00:25:21.663 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2795389
00:25:21.663 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
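The teardown traced above comes from common/autotest_common.sh: killprocess validates the pid, resolves the process name (reactor_2 here, the bdevperf reactor thread), announces the kill, then kills and reaps the process; afterwards the test dumps bdevperf's saved log from try.txt. A compressed sketch of the helper as this trace exercises it; the real helper also has extra handling for sudo-wrapped children, which the @960 test probes for but does not take here:

  # Sketch of killprocess (common/autotest_common.sh @950-@974).
  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1                            # @950: require a pid
      kill -0 "$pid" || return 0                           # @954: already gone
      [[ $(uname) == Linux ]] &&                           # @955
          process_name=$(ps --no-headers -o comm= "$pid")  # @956: reactor_2 here
      echo "killing process with pid $pid"                 # @968
      kill "$pid"                                          # @969
      wait "$pid"                                          # @974: reap it
  }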
00:25:21.663 [2024-07-25 07:30:25.795421] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:25:21.663 [2024-07-25 07:30:25.795476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795389 ]
00:25:21.663 EAL: No free 2048 kB hugepages reported on node 1
00:25:21.663 [2024-07-25 07:30:25.874232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:21.663 [2024-07-25 07:30:25.944017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:25:21.663 Running I/O for 90 seconds...
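What follows is bdevperf's per-command qpair trace from try.txt: nvme_io_qpair_print_command logs each WRITE/READ it submits and spdk_nvme_print_completion logs the matching completion, here reporting ASYMMETRIC ACCESS INACCESSIBLE (the "(03/02)" in each line, status code type 03h, status code 02h) while the addressed listener's ANA state makes that path unusable; these are the path-related errors the flags checked above reflect. A quick way to size up a saved copy of this dump, assuming each completion sits on its own line as in the unwrapped log (the try.txt name mirrors the cat above):

  # Count ANA-inaccessible completions in the bdevperf log.
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' try.txt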
00:25:21.663 [2024-07-25 07:30:39.138986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:21.663 [2024-07-25 07:30:39.139650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.663 [2024-07-25 07:30:39.139670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:21.663 [2024-07-25 07:30:39.139682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.139985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.664 [2024-07-25 07:30:39.139994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:21.664 
[2024-07-25 07:30:39.140047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:21.664 [2024-07-25 07:30:39.140414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183100 00:25:21.664 [2024-07-25 07:30:39.140423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.665 [2024-07-25 07:30:39.140484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.665 [2024-07-25 07:30:39.140504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.665 [2024-07-25 07:30:39.140523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.665 [2024-07-25 07:30:39.140544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.665 [2024-07-25 07:30:39.140564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.665 [2024-07-25 07:30:39.140583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.665 [2024-07-25 07:30:39.140603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.665 [2024-07-25 07:30:39.140623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.665 [2024-07-25 07:30:39.140647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.665 [2024-07-25 07:30:39.140683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.140979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.140991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.141000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.141012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:79048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.141021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.141033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.141042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.141053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.141062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.141726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.141738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.141759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.141768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.141786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.141796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.141813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.141823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.141840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.141850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.141867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183100 00:25:21.665 [2024-07-25 07:30:39.141877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:21.665 [2024-07-25 07:30:39.141894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79120 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.141908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.141926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.141936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.141953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.141963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.141980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.141989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.666 [2024-07-25 07:30:39.142043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.666 [2024-07-25 07:30:39.142070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 
07:30:39.142152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:39.142521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:39.142531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.702830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:51.702870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.702904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:51.702915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.702928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:51.702937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.702949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:51.702957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.702969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:51.702979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.702991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.666 [2024-07-25 07:30:51.702999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.703011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:51.703020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.703394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.666 [2024-07-25 07:30:51.703404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.703416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.666 [2024-07-25 07:30:51.703425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.703436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.666 [2024-07-25 07:30:51.703446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.703457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.666 [2024-07-25 07:30:51.703470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.703482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.666 [2024-07-25 07:30:51.703490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:21.666 [2024-07-25 07:30:51.703502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x183100 00:25:21.666 [2024-07-25 07:30:51.703511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75272 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.703531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.703551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.703571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.703592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.703611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.703638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.703658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.703678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.703698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.703720] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.703739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.703760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.703780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.703800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.703819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.703840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.703859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.703879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.703899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703910] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.703919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.703939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.703961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.703981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.703993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.704001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.704013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.704021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.704032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.704041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.704118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.704129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.704140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.704149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.704160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.667 [2024-07-25 07:30:51.704169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:21.667 [2024-07-25 07:30:51.704181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183100 00:25:21.667 [2024-07-25 07:30:51.704189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.668 [2024-07-25 07:30:51.704210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.668 [2024-07-25 07:30:51.704229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.668 [2024-07-25 07:30:51.704249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.668 [2024-07-25 07:30:51.704271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x183100 00:25:21.668 [2024-07-25 07:30:51.704291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x183100 00:25:21.668 [2024-07-25 07:30:51.704311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183100 00:25:21.668 [2024-07-25 07:30:51.704331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.668 [2024-07-25 07:30:51.704351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.668 [2024-07-25 07:30:51.704371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.668 [2024-07-25 07:30:51.704391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183100 00:25:21.668 [2024-07-25 07:30:51.704411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.668 [2024-07-25 07:30:51.704431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183100 00:25:21.668 [2024-07-25 07:30:51.704451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183100 00:25:21.668 [2024-07-25 07:30:51.704471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x183100 00:25:21.668 [2024-07-25 07:30:51.704492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.668 [2024-07-25 07:30:51.704512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183100 00:25:21.668 [2024-07-25 07:30:51.704532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.668 [2024-07-25 07:30:51.704543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183100 
00:25:21.668 [2024-07-25 07:30:51.704552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:21.668 [2024-07-25 07:30:51.704564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183100
00:25:21.668 [2024-07-25 07:30:51.704572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:21.668 [2024-07-25 07:30:51.704584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183100
00:25:21.668 [2024-07-25 07:30:51.704592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:21.668 [2024-07-25 07:30:51.704604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:21.668 [2024-07-25 07:30:51.704612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
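The *NOTICE* pairs above are SPDK echoing every I/O that failed while the preferred path was unavailable: nvme_io_qpair_print_command prints the submitted READ or WRITE, and spdk_nvme_print_completion prints its completion status. The "(03/02)" in each completion is the NVMe status code type/status code pair, SCT 0x3 (Path Related Status) with SC 0x02, which is exactly the ASYMMETRIC ACCESS INACCESSIBLE string printed beside it; these are expected while the controller's ANA group is inaccessible during a multipath failover, not a sign of data corruption. A quick way to tally such notices from a captured log (the file name is illustrative; this run's output file try.txt is removed in the teardown just below):

    # Tally path-related completion notices by their status string (illustrative file name)
    grep -o 'ASYMMETRIC ACCESS [A-Z ]*' try.txt | sort | uniq -c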
00:25:21.668 Received shutdown signal, test time was about 26.449400 seconds
00:25:21.668
00:25:21.668                                                                                           Latency(us)
00:25:21.668 Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:21.668 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:21.668 Verification LBA range: start 0x0 length 0x4000
00:25:21.668 	 Nvme0n1                                                              :      26.45   16143.54      63.06       0.00       0.00    7908.00      56.52 3019898.88
00:25:21.668 ===================================================================================================================
00:25:21.668 Total                                                                     :               16143.54      63.06       0.00       0.00    7908.00      56.52 3019898.88
00:25:21.668 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:25:21.928 rmmod nvme_rdma
00:25:21.928 rmmod nvme_fabrics
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2795094 ']'
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2795094
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2795094 ']'
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2795094
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2795094
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2795094'
00:25:21.928 killing process with pid 2795094
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2795094
00:25:21.928 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2795094
00:25:22.188 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:22.188 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:25:22.188
00:25:22.188 real	0m39.343s
00:25:22.188 user	1m46.064s
00:25:22.188 sys	0m10.667s
00:25:22.188 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:22.188 07:30:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:22.188 ************************************
00:25:22.188 END TEST nvmf_host_multipath_status
00:25:22.188 ************************************
00:25:22.188 07:30:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:25:22.188 07:30:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:22.188 07:30:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:22.188 07:30:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.188 ************************************
00:25:22.188 START TEST nvmf_discovery_remove_ifc
00:25:22.188 ************************************
00:25:22.188 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:25:22.448 * Looking for test storage...
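The killprocess call in the teardown above (common/autotest_common.sh@950 through @974) is careful about what it kills: it requires a pid, checks the process is still alive, resolves the command name with ps, refuses to signal a bare sudo wrapper, and finally kills and waits so the target's exit status is reaped. A minimal sketch reconstructed from the xtrace, not the verbatim helper:

    # killprocess flow as reconstructed from the trace above (sketch, not verbatim)
    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                            # @950: a pid is required
      kill -0 "$pid"                                       # @954: fails if already gone
      local process_name
      if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")    # @956: resolves to reactor_0 here
      fi
      [ "$process_name" = sudo ] && return 1               # @960: sudo wrapper handled specially (elided)
      echo "killing process with pid $pid"                 # @968
      kill "$pid"                                          # @969
      wait "$pid"                                          # @974: reap and collect exit status
    }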
00:25:22.448 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:22.448 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:22.449 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.449 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.449 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.449 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n 
'' ']'
00:25:22.449 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:22.449 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:22.449 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']'
00:25:22.449 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:25:22.449 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:25:22.449 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0
00:25:22.449
00:25:22.449 real	0m0.131s
00:25:22.449 user	0m0.061s
00:25:22.449 sys	0m0.079s
00:25:22.449 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:22.449 07:30:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:22.449 ************************************
00:25:22.449 END TEST nvmf_discovery_remove_ifc
00:25:22.449 ************************************
00:25:22.449 07:30:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:25:22.449 07:30:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:22.449 07:30:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:22.449 07:30:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.449 ************************************
00:25:22.449 START TEST nvmf_identify_kernel_target
00:25:22.449 ************************************
00:25:22.449 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:25:22.449 * Looking for test storage...
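The whole nvmf_discovery_remove_ifc run above reduces to three xtrace lines: discovery_remove_ifc.sh@14 tests the transport, @15 prints the reason, and @16 exits 0 so the suite records a skip rather than a failure. The guard amounts to the following sketch (the variable name is an assumption; the trace only shows the already-expanded comparison 'rdma == rdma'):

    # Early-exit transport guard, reconstructed from discovery_remove_ifc.sh@14-16.
    # TEST_TRANSPORT is an assumed name; the xtrace shows only its expanded value.
    if [ "$TEST_TRANSPORT" == rdma ]; then
      echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
      exit 0
    fi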
00:25:22.708 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:22.708 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.708 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:22.708 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.708 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.708 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.708 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.708 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.708 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.708 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.708 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.708 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.709 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.709 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:22.709 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:22.709 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.709 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.709 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.709 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.709 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:22.709 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.709 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.709 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.709 07:30:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:22.709 07:30:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 
00:25:30.832 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]]
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}")
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}")
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]]
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}")
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:25:30.833 Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]]
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]]
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]]
00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
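gather_supported_nvmf_pci_devs fills the per-family ID tables above (e810 and x722 for Intel, mlx for the Mellanox device IDs 0xa2dc through 0x1013) and then walks every cached PCI function. The 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' line is a match on vendor 0x15b3 (Mellanox) with device 0x1015, which appears to be a ConnectX-4 Lx part, and SPDK_TEST_NVMF_NICS=mlx5 is why the mlx table replaces the combined list. The same lookup can be reproduced by hand with lspci (illustrative, not part of the harness):

    # List Mellanox functions with numeric vendor:device IDs (illustrative)
    lspci -nn -d 15b3:
    # Or restrict to the 0x1015 devices matched above:
    lspci -nn -d 15b3:1015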
"${pci_devs[@]}" 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:30.833 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:30.833 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:30.833 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ 
rdma == tcp ]] 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:30.833 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:30.834 07:31:03 
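The load_ib_rdma_modules step earlier in this stretch (nvmf/common.sh@62-68) amounts to loading the in-tree InfiniBand/RDMA stack. A minimal standalone sketch of the same sequence, assuming root and a kernel that ships these modules:

  # Load the IB/RDMA core, user-space verbs, and connection managers,
  # mirroring the modprobe calls traced above.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done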
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:30.834 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:30.834 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:30.834 altname enp217s0f0np0 00:25:30.834 altname ens818f0np0 00:25:30.834 inet 192.168.100.8/24 scope global mlx_0_0 00:25:30.834 valid_lft forever preferred_lft forever 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:30.834 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:30.834 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:30.834 altname enp217s0f1np1 00:25:30.834 altname ens818f1np1 00:25:30.834 inet 192.168.100.9/24 scope global mlx_0_1 00:25:30.834 valid_lft forever preferred_lft forever 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 
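The allocate_nic_ips loop above resolves each RDMA netdev to its IPv4 address with the three-stage pipeline shown in the trace (ip -o -4, awk, cut). A minimal sketch using the interface names and addresses from this log:

  # get_ip_address equivalent: the one-line-per-address output of `ip -o -4`
  # carries the CIDR address in field 4; cut strips the prefix length.
  for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # prints 192.168.100.8 and 192.168.100.9 on this host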
00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip 
-o -4 addr show mlx_0_1 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:30.834 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:30.835 192.168.100.9' 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:30.835 192.168.100.9' 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:30.835 192.168.100.9' 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:25:30.835 
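The trace above splits the newline-separated RDMA_IP_LIST into the first and second target IPs with head and tail. The same selection, reduced to a standalone sketch with the values from this run:

  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9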
07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:30.835 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:31.094 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:31.094 07:31:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:25:35.283 Waiting for block devices as requested 00:25:35.283 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:35.283 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:35.283 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:35.283 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:35.283 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:35.283 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:35.542 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:35.542 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:35.542 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:35.542 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:35.801 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:35.801 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:35.801 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:36.060 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:36.060 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:36.060 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:36.319 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:25:36.319 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:36.319 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:36.319 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:36.319 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:36.319 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:36.319 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:36.319 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # 
block_in_use nvme0n1 00:25:36.319 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:36.319 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:36.578 No valid GPT data, bailing 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:36.578 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:36.579 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:36.579 07:31:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:25:36.579 00:25:36.579 Discovery Log Number of Records 2, Generation counter 2 00:25:36.579 =====Discovery Log Entry 0====== 00:25:36.579 trtype: rdma 00:25:36.579 adrfam: ipv4 00:25:36.579 subtype: current discovery subsystem 00:25:36.579 treq: not specified, sq flow control disable supported 00:25:36.579 portid: 1 00:25:36.579 trsvcid: 4420 00:25:36.579 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:36.579 traddr: 192.168.100.8 00:25:36.579 eflags: none 00:25:36.579 rdma_prtype: not specified 00:25:36.579 rdma_qptype: connected 00:25:36.579 rdma_cms: rdma-cm 00:25:36.579 rdma_pkey: 0x0000 00:25:36.579 =====Discovery Log Entry 1====== 00:25:36.579 trtype: rdma 00:25:36.579 adrfam: ipv4 00:25:36.579 subtype: nvme subsystem 00:25:36.579 
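The configure_kernel_target sequence traced above (nvmf/common.sh@658-677) builds the kernel nvmet target purely through configfs: create the subsystem, namespace, and port directories, write the attributes, then link the subsystem into the port. A sketch of the same steps; the values come from the log, but the attribute file names are not visible in the trace, so the standard Linux nvmet configfs names are assumed here:

  # Assumes nvmet and nvmet_rdma are already loaded and configfs is mounted.
  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo "SPDK-$nqn"   > "$sub/attr_model"           # model string echoed at @665
  echo 1             > "$sub/attr_allow_any_host"  # "echo 1" at @667 (assumed target)
  echo /dev/nvme0n1  > "$sub/namespaces/1/device_path"
  echo 1             > "$sub/namespaces/1/enable"
  echo 192.168.100.8 > "$port/addr_traddr"
  echo rdma          > "$port/addr_trtype"
  echo 4420          > "$port/addr_trsvcid"
  echo ipv4          > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                 # expose the subsystem on the port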
treq: not specified, sq flow control disable supported 00:25:36.579 portid: 1 00:25:36.579 trsvcid: 4420 00:25:36.579 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:36.579 traddr: 192.168.100.8 00:25:36.579 eflags: none 00:25:36.579 rdma_prtype: not specified 00:25:36.579 rdma_qptype: connected 00:25:36.579 rdma_cms: rdma-cm 00:25:36.579 rdma_pkey: 0x0000 00:25:36.579 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:25:36.579 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:36.839 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.839 ===================================================== 00:25:36.839 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:36.839 ===================================================== 00:25:36.839 Controller Capabilities/Features 00:25:36.839 ================================ 00:25:36.839 Vendor ID: 0000 00:25:36.839 Subsystem Vendor ID: 0000 00:25:36.839 Serial Number: ff7ab0dd1f1745f48b91 00:25:36.839 Model Number: Linux 00:25:36.839 Firmware Version: 6.7.0-68 00:25:36.839 Recommended Arb Burst: 0 00:25:36.839 IEEE OUI Identifier: 00 00 00 00:25:36.839 Multi-path I/O 00:25:36.839 May have multiple subsystem ports: No 00:25:36.839 May have multiple controllers: No 00:25:36.839 Associated with SR-IOV VF: No 00:25:36.839 Max Data Transfer Size: Unlimited 00:25:36.839 Max Number of Namespaces: 0 00:25:36.839 Max Number of I/O Queues: 1024 00:25:36.839 NVMe Specification Version (VS): 1.3 00:25:36.839 NVMe Specification Version (Identify): 1.3 00:25:36.839 Maximum Queue Entries: 128 00:25:36.839 Contiguous Queues Required: No 00:25:36.839 Arbitration Mechanisms Supported 00:25:36.839 Weighted Round Robin: Not Supported 00:25:36.839 Vendor Specific: Not Supported 00:25:36.839 Reset Timeout: 7500 ms 00:25:36.839 Doorbell Stride: 4 bytes 00:25:36.839 NVM Subsystem Reset: Not Supported 00:25:36.839 Command Sets Supported 00:25:36.839 NVM Command Set: Supported 00:25:36.839 Boot Partition: Not Supported 00:25:36.839 Memory Page Size Minimum: 4096 bytes 00:25:36.839 Memory Page Size Maximum: 4096 bytes 00:25:36.839 Persistent Memory Region: Not Supported 00:25:36.839 Optional Asynchronous Events Supported 00:25:36.839 Namespace Attribute Notices: Not Supported 00:25:36.839 Firmware Activation Notices: Not Supported 00:25:36.839 ANA Change Notices: Not Supported 00:25:36.839 PLE Aggregate Log Change Notices: Not Supported 00:25:36.839 LBA Status Info Alert Notices: Not Supported 00:25:36.839 EGE Aggregate Log Change Notices: Not Supported 00:25:36.839 Normal NVM Subsystem Shutdown event: Not Supported 00:25:36.839 Zone Descriptor Change Notices: Not Supported 00:25:36.839 Discovery Log Change Notices: Supported 00:25:36.839 Controller Attributes 00:25:36.839 128-bit Host Identifier: Not Supported 00:25:36.839 Non-Operational Permissive Mode: Not Supported 00:25:36.839 NVM Sets: Not Supported 00:25:36.839 Read Recovery Levels: Not Supported 00:25:36.839 Endurance Groups: Not Supported 00:25:36.839 Predictable Latency Mode: Not Supported 00:25:36.839 Traffic Based Keep ALive: Not Supported 00:25:36.839 Namespace Granularity: Not Supported 00:25:36.839 SQ Associations: Not Supported 00:25:36.839 UUID List: Not Supported 00:25:36.839 Multi-Domain Subsystem: Not Supported 00:25:36.839 Fixed Capacity Management: Not Supported 00:25:36.839 Variable 
Capacity Management: Not Supported 00:25:36.839 Delete Endurance Group: Not Supported 00:25:36.839 Delete NVM Set: Not Supported 00:25:36.839 Extended LBA Formats Supported: Not Supported 00:25:36.839 Flexible Data Placement Supported: Not Supported 00:25:36.839 00:25:36.839 Controller Memory Buffer Support 00:25:36.839 ================================ 00:25:36.839 Supported: No 00:25:36.839 00:25:36.839 Persistent Memory Region Support 00:25:36.839 ================================ 00:25:36.839 Supported: No 00:25:36.839 00:25:36.839 Admin Command Set Attributes 00:25:36.839 ============================ 00:25:36.839 Security Send/Receive: Not Supported 00:25:36.839 Format NVM: Not Supported 00:25:36.839 Firmware Activate/Download: Not Supported 00:25:36.839 Namespace Management: Not Supported 00:25:36.839 Device Self-Test: Not Supported 00:25:36.839 Directives: Not Supported 00:25:36.839 NVMe-MI: Not Supported 00:25:36.839 Virtualization Management: Not Supported 00:25:36.839 Doorbell Buffer Config: Not Supported 00:25:36.839 Get LBA Status Capability: Not Supported 00:25:36.839 Command & Feature Lockdown Capability: Not Supported 00:25:36.839 Abort Command Limit: 1 00:25:36.839 Async Event Request Limit: 1 00:25:36.839 Number of Firmware Slots: N/A 00:25:36.839 Firmware Slot 1 Read-Only: N/A 00:25:36.839 Firmware Activation Without Reset: N/A 00:25:36.839 Multiple Update Detection Support: N/A 00:25:36.839 Firmware Update Granularity: No Information Provided 00:25:36.839 Per-Namespace SMART Log: No 00:25:36.839 Asymmetric Namespace Access Log Page: Not Supported 00:25:36.839 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:36.839 Command Effects Log Page: Not Supported 00:25:36.839 Get Log Page Extended Data: Supported 00:25:36.839 Telemetry Log Pages: Not Supported 00:25:36.839 Persistent Event Log Pages: Not Supported 00:25:36.840 Supported Log Pages Log Page: May Support 00:25:36.840 Commands Supported & Effects Log Page: Not Supported 00:25:36.840 Feature Identifiers & Effects Log Page:May Support 00:25:36.840 NVMe-MI Commands & Effects Log Page: May Support 00:25:36.840 Data Area 4 for Telemetry Log: Not Supported 00:25:36.840 Error Log Page Entries Supported: 1 00:25:36.840 Keep Alive: Not Supported 00:25:36.840 00:25:36.840 NVM Command Set Attributes 00:25:36.840 ========================== 00:25:36.840 Submission Queue Entry Size 00:25:36.840 Max: 1 00:25:36.840 Min: 1 00:25:36.840 Completion Queue Entry Size 00:25:36.840 Max: 1 00:25:36.840 Min: 1 00:25:36.840 Number of Namespaces: 0 00:25:36.840 Compare Command: Not Supported 00:25:36.840 Write Uncorrectable Command: Not Supported 00:25:36.840 Dataset Management Command: Not Supported 00:25:36.840 Write Zeroes Command: Not Supported 00:25:36.840 Set Features Save Field: Not Supported 00:25:36.840 Reservations: Not Supported 00:25:36.840 Timestamp: Not Supported 00:25:36.840 Copy: Not Supported 00:25:36.840 Volatile Write Cache: Not Present 00:25:36.840 Atomic Write Unit (Normal): 1 00:25:36.840 Atomic Write Unit (PFail): 1 00:25:36.840 Atomic Compare & Write Unit: 1 00:25:36.840 Fused Compare & Write: Not Supported 00:25:36.840 Scatter-Gather List 00:25:36.840 SGL Command Set: Supported 00:25:36.840 SGL Keyed: Supported 00:25:36.840 SGL Bit Bucket Descriptor: Not Supported 00:25:36.840 SGL Metadata Pointer: Not Supported 00:25:36.840 Oversized SGL: Not Supported 00:25:36.840 SGL Metadata Address: Not Supported 00:25:36.840 SGL Offset: Supported 00:25:36.840 Transport SGL Data Block: Not Supported 00:25:36.840 Replay 
Protected Memory Block: Not Supported 00:25:36.840 00:25:36.840 Firmware Slot Information 00:25:36.840 ========================= 00:25:36.840 Active slot: 0 00:25:36.840 00:25:36.840 00:25:36.840 Error Log 00:25:36.840 ========= 00:25:36.840 00:25:36.840 Active Namespaces 00:25:36.840 ================= 00:25:36.840 Discovery Log Page 00:25:36.840 ================== 00:25:36.840 Generation Counter: 2 00:25:36.840 Number of Records: 2 00:25:36.840 Record Format: 0 00:25:36.840 00:25:36.840 Discovery Log Entry 0 00:25:36.840 ---------------------- 00:25:36.840 Transport Type: 1 (RDMA) 00:25:36.840 Address Family: 1 (IPv4) 00:25:36.840 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:36.840 Entry Flags: 00:25:36.840 Duplicate Returned Information: 0 00:25:36.840 Explicit Persistent Connection Support for Discovery: 0 00:25:36.840 Transport Requirements: 00:25:36.840 Secure Channel: Not Specified 00:25:36.840 Port ID: 1 (0x0001) 00:25:36.840 Controller ID: 65535 (0xffff) 00:25:36.840 Admin Max SQ Size: 32 00:25:36.840 Transport Service Identifier: 4420 00:25:36.840 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:36.840 Transport Address: 192.168.100.8 00:25:36.840 Transport Specific Address Subtype - RDMA 00:25:36.840 RDMA QP Service Type: 1 (Reliable Connected) 00:25:36.840 RDMA Provider Type: 1 (No provider specified) 00:25:36.840 RDMA CM Service: 1 (RDMA_CM) 00:25:36.840 Discovery Log Entry 1 00:25:36.840 ---------------------- 00:25:36.840 Transport Type: 1 (RDMA) 00:25:36.840 Address Family: 1 (IPv4) 00:25:36.840 Subsystem Type: 2 (NVM Subsystem) 00:25:36.840 Entry Flags: 00:25:36.840 Duplicate Returned Information: 0 00:25:36.840 Explicit Persistent Connection Support for Discovery: 0 00:25:36.840 Transport Requirements: 00:25:36.840 Secure Channel: Not Specified 00:25:36.840 Port ID: 1 (0x0001) 00:25:36.840 Controller ID: 65535 (0xffff) 00:25:36.840 Admin Max SQ Size: 32 00:25:36.840 Transport Service Identifier: 4420 00:25:36.840 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:36.840 Transport Address: 192.168.100.8 00:25:36.840 Transport Specific Address Subtype - RDMA 00:25:36.840 RDMA QP Service Type: 1 (Reliable Connected) 00:25:36.840 RDMA Provider Type: 1 (No provider specified) 00:25:36.840 RDMA CM Service: 1 (RDMA_CM) 00:25:36.840 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:36.840 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.840 get_feature(0x01) failed 00:25:36.840 get_feature(0x02) failed 00:25:36.840 get_feature(0x04) failed 00:25:36.840 ===================================================== 00:25:36.840 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:25:36.840 ===================================================== 00:25:36.840 Controller Capabilities/Features 00:25:36.840 ================================ 00:25:36.840 Vendor ID: 0000 00:25:36.840 Subsystem Vendor ID: 0000 00:25:36.840 Serial Number: ac93ad62cd32398a9d27 00:25:36.840 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:36.840 Firmware Version: 6.7.0-68 00:25:36.840 Recommended Arb Burst: 6 00:25:36.840 IEEE OUI Identifier: 00 00 00 00:25:36.840 Multi-path I/O 00:25:36.840 May have multiple subsystem ports: Yes 00:25:36.840 May have multiple controllers: Yes 00:25:36.840 Associated with 
SR-IOV VF: No 00:25:36.840 Max Data Transfer Size: 1048576 00:25:36.840 Max Number of Namespaces: 1024 00:25:36.840 Max Number of I/O Queues: 128 00:25:36.840 NVMe Specification Version (VS): 1.3 00:25:36.840 NVMe Specification Version (Identify): 1.3 00:25:36.840 Maximum Queue Entries: 128 00:25:36.840 Contiguous Queues Required: No 00:25:36.840 Arbitration Mechanisms Supported 00:25:36.840 Weighted Round Robin: Not Supported 00:25:36.840 Vendor Specific: Not Supported 00:25:36.840 Reset Timeout: 7500 ms 00:25:36.840 Doorbell Stride: 4 bytes 00:25:36.840 NVM Subsystem Reset: Not Supported 00:25:36.840 Command Sets Supported 00:25:36.840 NVM Command Set: Supported 00:25:36.840 Boot Partition: Not Supported 00:25:36.840 Memory Page Size Minimum: 4096 bytes 00:25:36.840 Memory Page Size Maximum: 4096 bytes 00:25:36.840 Persistent Memory Region: Not Supported 00:25:36.840 Optional Asynchronous Events Supported 00:25:36.840 Namespace Attribute Notices: Supported 00:25:36.840 Firmware Activation Notices: Not Supported 00:25:36.840 ANA Change Notices: Supported 00:25:36.840 PLE Aggregate Log Change Notices: Not Supported 00:25:36.840 LBA Status Info Alert Notices: Not Supported 00:25:36.840 EGE Aggregate Log Change Notices: Not Supported 00:25:36.840 Normal NVM Subsystem Shutdown event: Not Supported 00:25:36.840 Zone Descriptor Change Notices: Not Supported 00:25:36.840 Discovery Log Change Notices: Not Supported 00:25:36.840 Controller Attributes 00:25:36.840 128-bit Host Identifier: Supported 00:25:36.840 Non-Operational Permissive Mode: Not Supported 00:25:36.840 NVM Sets: Not Supported 00:25:36.840 Read Recovery Levels: Not Supported 00:25:36.840 Endurance Groups: Not Supported 00:25:36.840 Predictable Latency Mode: Not Supported 00:25:36.840 Traffic Based Keep ALive: Supported 00:25:36.840 Namespace Granularity: Not Supported 00:25:36.840 SQ Associations: Not Supported 00:25:36.840 UUID List: Not Supported 00:25:36.840 Multi-Domain Subsystem: Not Supported 00:25:36.840 Fixed Capacity Management: Not Supported 00:25:36.840 Variable Capacity Management: Not Supported 00:25:36.840 Delete Endurance Group: Not Supported 00:25:36.840 Delete NVM Set: Not Supported 00:25:36.840 Extended LBA Formats Supported: Not Supported 00:25:36.840 Flexible Data Placement Supported: Not Supported 00:25:36.840 00:25:36.840 Controller Memory Buffer Support 00:25:36.840 ================================ 00:25:36.840 Supported: No 00:25:36.840 00:25:36.840 Persistent Memory Region Support 00:25:36.840 ================================ 00:25:36.840 Supported: No 00:25:36.840 00:25:36.840 Admin Command Set Attributes 00:25:36.840 ============================ 00:25:36.840 Security Send/Receive: Not Supported 00:25:36.840 Format NVM: Not Supported 00:25:36.840 Firmware Activate/Download: Not Supported 00:25:36.840 Namespace Management: Not Supported 00:25:36.840 Device Self-Test: Not Supported 00:25:36.840 Directives: Not Supported 00:25:36.840 NVMe-MI: Not Supported 00:25:36.840 Virtualization Management: Not Supported 00:25:36.840 Doorbell Buffer Config: Not Supported 00:25:36.840 Get LBA Status Capability: Not Supported 00:25:36.840 Command & Feature Lockdown Capability: Not Supported 00:25:36.840 Abort Command Limit: 4 00:25:36.840 Async Event Request Limit: 4 00:25:36.840 Number of Firmware Slots: N/A 00:25:36.840 Firmware Slot 1 Read-Only: N/A 00:25:36.840 Firmware Activation Without Reset: N/A 00:25:36.840 Multiple Update Detection Support: N/A 00:25:36.840 Firmware Update Granularity: No Information Provided 
00:25:36.841 Per-Namespace SMART Log: Yes 00:25:36.841 Asymmetric Namespace Access Log Page: Supported 00:25:36.841 ANA Transition Time : 10 sec 00:25:36.841 00:25:36.841 Asymmetric Namespace Access Capabilities 00:25:36.841 ANA Optimized State : Supported 00:25:36.841 ANA Non-Optimized State : Supported 00:25:36.841 ANA Inaccessible State : Supported 00:25:36.841 ANA Persistent Loss State : Supported 00:25:36.841 ANA Change State : Supported 00:25:36.841 ANAGRPID is not changed : No 00:25:36.841 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:36.841 00:25:36.841 ANA Group Identifier Maximum : 128 00:25:36.841 Number of ANA Group Identifiers : 128 00:25:36.841 Max Number of Allowed Namespaces : 1024 00:25:36.841 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:36.841 Command Effects Log Page: Supported 00:25:36.841 Get Log Page Extended Data: Supported 00:25:36.841 Telemetry Log Pages: Not Supported 00:25:36.841 Persistent Event Log Pages: Not Supported 00:25:36.841 Supported Log Pages Log Page: May Support 00:25:36.841 Commands Supported & Effects Log Page: Not Supported 00:25:36.841 Feature Identifiers & Effects Log Page:May Support 00:25:36.841 NVMe-MI Commands & Effects Log Page: May Support 00:25:36.841 Data Area 4 for Telemetry Log: Not Supported 00:25:36.841 Error Log Page Entries Supported: 128 00:25:36.841 Keep Alive: Supported 00:25:36.841 Keep Alive Granularity: 1000 ms 00:25:36.841 00:25:36.841 NVM Command Set Attributes 00:25:36.841 ========================== 00:25:36.841 Submission Queue Entry Size 00:25:36.841 Max: 64 00:25:36.841 Min: 64 00:25:36.841 Completion Queue Entry Size 00:25:36.841 Max: 16 00:25:36.841 Min: 16 00:25:36.841 Number of Namespaces: 1024 00:25:36.841 Compare Command: Not Supported 00:25:36.841 Write Uncorrectable Command: Not Supported 00:25:36.841 Dataset Management Command: Supported 00:25:36.841 Write Zeroes Command: Supported 00:25:36.841 Set Features Save Field: Not Supported 00:25:36.841 Reservations: Not Supported 00:25:36.841 Timestamp: Not Supported 00:25:36.841 Copy: Not Supported 00:25:36.841 Volatile Write Cache: Present 00:25:36.841 Atomic Write Unit (Normal): 1 00:25:36.841 Atomic Write Unit (PFail): 1 00:25:36.841 Atomic Compare & Write Unit: 1 00:25:36.841 Fused Compare & Write: Not Supported 00:25:36.841 Scatter-Gather List 00:25:36.841 SGL Command Set: Supported 00:25:36.841 SGL Keyed: Supported 00:25:36.841 SGL Bit Bucket Descriptor: Not Supported 00:25:36.841 SGL Metadata Pointer: Not Supported 00:25:36.841 Oversized SGL: Not Supported 00:25:36.841 SGL Metadata Address: Not Supported 00:25:36.841 SGL Offset: Supported 00:25:36.841 Transport SGL Data Block: Not Supported 00:25:36.841 Replay Protected Memory Block: Not Supported 00:25:36.841 00:25:36.841 Firmware Slot Information 00:25:36.841 ========================= 00:25:36.841 Active slot: 0 00:25:36.841 00:25:36.841 Asymmetric Namespace Access 00:25:36.841 =========================== 00:25:36.841 Change Count : 0 00:25:36.841 Number of ANA Group Descriptors : 1 00:25:36.841 ANA Group Descriptor : 0 00:25:36.841 ANA Group ID : 1 00:25:36.841 Number of NSID Values : 1 00:25:36.841 Change Count : 0 00:25:36.841 ANA State : 1 00:25:36.841 Namespace Identifier : 1 00:25:36.841 00:25:36.841 Commands Supported and Effects 00:25:36.841 ============================== 00:25:36.841 Admin Commands 00:25:36.841 -------------- 00:25:36.841 Get Log Page (02h): Supported 00:25:36.841 Identify (06h): Supported 00:25:36.841 Abort (08h): Supported 00:25:36.841 Set Features (09h): Supported 
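This identify output comes from spdk_nvme_identify connecting directly over RDMA; the test never attaches the kernel host stack. For reference, the NVME_CONNECT command the trace assembled earlier ('nvme connect -i 15') would attach a host to this same target roughly as follows, reusing the hostnqn/hostid that the discover call used (a sketch only, not executed in this run):

  nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e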
00:25:36.841 Get Features (0Ah): Supported 00:25:36.841 Asynchronous Event Request (0Ch): Supported 00:25:36.841 Keep Alive (18h): Supported 00:25:36.841 I/O Commands 00:25:36.841 ------------ 00:25:36.841 Flush (00h): Supported 00:25:36.841 Write (01h): Supported LBA-Change 00:25:36.841 Read (02h): Supported 00:25:36.841 Write Zeroes (08h): Supported LBA-Change 00:25:36.841 Dataset Management (09h): Supported 00:25:36.841 00:25:36.841 Error Log 00:25:36.841 ========= 00:25:36.841 Entry: 0 00:25:36.841 Error Count: 0x3 00:25:36.841 Submission Queue Id: 0x0 00:25:36.841 Command Id: 0x5 00:25:36.841 Phase Bit: 0 00:25:36.841 Status Code: 0x2 00:25:36.841 Status Code Type: 0x0 00:25:36.841 Do Not Retry: 1 00:25:37.101 Error Location: 0x28 00:25:37.101 LBA: 0x0 00:25:37.101 Namespace: 0x0 00:25:37.101 Vendor Log Page: 0x0 00:25:37.101 ----------- 00:25:37.101 Entry: 1 00:25:37.101 Error Count: 0x2 00:25:37.101 Submission Queue Id: 0x0 00:25:37.101 Command Id: 0x5 00:25:37.101 Phase Bit: 0 00:25:37.101 Status Code: 0x2 00:25:37.101 Status Code Type: 0x0 00:25:37.101 Do Not Retry: 1 00:25:37.101 Error Location: 0x28 00:25:37.101 LBA: 0x0 00:25:37.101 Namespace: 0x0 00:25:37.101 Vendor Log Page: 0x0 00:25:37.101 ----------- 00:25:37.101 Entry: 2 00:25:37.101 Error Count: 0x1 00:25:37.101 Submission Queue Id: 0x0 00:25:37.101 Command Id: 0x0 00:25:37.101 Phase Bit: 0 00:25:37.101 Status Code: 0x2 00:25:37.101 Status Code Type: 0x0 00:25:37.101 Do Not Retry: 1 00:25:37.101 Error Location: 0x28 00:25:37.101 LBA: 0x0 00:25:37.101 Namespace: 0x0 00:25:37.101 Vendor Log Page: 0x0 00:25:37.101 00:25:37.101 Number of Queues 00:25:37.101 ================ 00:25:37.101 Number of I/O Submission Queues: 128 00:25:37.101 Number of I/O Completion Queues: 128 00:25:37.101 00:25:37.101 ZNS Specific Controller Data 00:25:37.101 ============================ 00:25:37.101 Zone Append Size Limit: 0 00:25:37.101 00:25:37.101 00:25:37.101 Active Namespaces 00:25:37.101 ================= 00:25:37.101 get_feature(0x05) failed 00:25:37.101 Namespace ID:1 00:25:37.101 Command Set Identifier: NVM (00h) 00:25:37.101 Deallocate: Supported 00:25:37.101 Deallocated/Unwritten Error: Not Supported 00:25:37.101 Deallocated Read Value: Unknown 00:25:37.101 Deallocate in Write Zeroes: Not Supported 00:25:37.101 Deallocated Guard Field: 0xFFFF 00:25:37.101 Flush: Supported 00:25:37.101 Reservation: Not Supported 00:25:37.101 Namespace Sharing Capabilities: Multiple Controllers 00:25:37.101 Size (in LBAs): 3907029168 (1863GiB) 00:25:37.101 Capacity (in LBAs): 3907029168 (1863GiB) 00:25:37.101 Utilization (in LBAs): 3907029168 (1863GiB) 00:25:37.101 UUID: 1f914218-532e-484b-af7e-689c4d565db6 00:25:37.101 Thin Provisioning: Not Supported 00:25:37.101 Per-NS Atomic Units: Yes 00:25:37.101 Atomic Boundary Size (Normal): 0 00:25:37.101 Atomic Boundary Size (PFail): 0 00:25:37.101 Atomic Boundary Offset: 0 00:25:37.101 NGUID/EUI64 Never Reused: No 00:25:37.101 ANA group ID: 1 00:25:37.101 Namespace Write Protected: No 00:25:37.101 Number of LBA Formats: 1 00:25:37.101 Current LBA Format: LBA Format #00 00:25:37.101 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:37.101 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:37.101 07:31:09 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:37.101 rmmod nvme_rdma 00:25:37.101 rmmod nvme_fabrics 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:25:37.101 07:31:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:25:41.338 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 
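The clean_kernel_target teardown traced just above undoes the configfs setup in strict child-before-parent order: disable the namespace, unlink the subsystem from the port, remove the directories, then unload the modules. As a standalone sketch (path variables as in the setup sketch earlier; the destination of the "echo 0" at @686 is not shown in the trace and is assumed to be the namespace enable file):

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  echo 0 > "$sub/namespaces/1/enable"       # assumed target of the "echo 0" at @686
  rm -f  "$port/subsystems/$nqn"            # drop the port->subsystem link first
  rmdir  "$sub/namespaces/1" "$port" "$sub" # children before parents
  modprobe -r nvmet_rdma nvmet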
0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:41.338 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:43.245 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:25:43.245 00:25:43.245 real 0m20.515s 00:25:43.245 user 0m5.429s 00:25:43.245 sys 0m12.288s 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.246 ************************************ 00:25:43.246 END TEST nvmf_identify_kernel_target 00:25:43.246 ************************************ 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.246 ************************************ 00:25:43.246 START TEST nvmf_auth_host 00:25:43.246 ************************************ 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:25:43.246 * Looking for test storage... 00:25:43.246 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@47 -- # : 0 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:43.246 07:31:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
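auth.sh declares its parameter space right here: three digests by five finite-field DH groups. How the script pairs them is not visible at this point in the log; a plain cross-product over the declared arrays would look like this (illustrative only):

  digests=("sha256" "sha384" "sha512")
  dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
  for d in "${digests[@]}"; do
      for g in "${dhgroups[@]}"; do
          echo "DH-HMAC-CHAP case: digest=$d dhgroup=$g"
      done
  done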
nvmf/common.sh@291 -- # local -a pci_devs 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- 
# echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:51.371 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:51.371 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:51.371 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:51.371 Found net devices under 
0000:d9:00.1: mlx_0_1 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:25:51.371 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
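
The block above is nvmf/common.sh classifying the test NICs: PCI functions are bucketed into the e810/x722/mlx arrays by vendor:device ID (both ports here report 0x15b3 - 0x1015, a Mellanox ConnectX part), each function is then resolved to its netdev through sysfs, and the kernel RDMA stack (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm) is loaded with modprobe. A minimal standalone sketch of the same sysfs lookup, reusing a PCI address from the log:

# Resolve a PCI function to its net devices the way the trace does,
# via the glob "/sys/bus/pci/devices/$pci/net/"* (common.sh@383).
pci=0000:d9:00.0
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
  [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
done
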
00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:51.372 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:51.372 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:51.372 altname enp217s0f0np0 00:25:51.372 altname ens818f0np0 00:25:51.372 inet 192.168.100.8/24 scope global mlx_0_0 00:25:51.372 valid_lft forever preferred_lft forever 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:51.372 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:51.372 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:51.372 altname enp217s0f1np1 00:25:51.372 altname ens818f1np1 00:25:51.372 inet 192.168.100.9/24 scope global mlx_0_1 00:25:51.372 valid_lft forever preferred_lft forever 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:51.372 192.168.100.9' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:51.372 192.168.100.9' 00:25:51.372 07:31:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:51.372 192.168.100.9' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2812203 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2812203 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2812203 ']' 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
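
allocate_nic_ips and get_available_rdma_ips both walk get_rdma_if_list and pull the IPv4 address off each RDMA interface with the same three-stage pipeline; the first two results become NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9 for the rest of the run. The pipeline as it appears in the trace, runnable on its own (interface name taken from the log):

# Print the addr/prefix field of `ip -o -4 addr show`, then strip the
# prefix length -- identical to the common.sh@113 steps in the trace.
interface=mlx_0_0
ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
# -> 192.168.100.8 on this rig
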
00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:51.372 07:31:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.939 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.939 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:51.939 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:51.939 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:51.939 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f89e4e70b394c4d290cfa7d2467e6b47 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.81U 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f89e4e70b394c4d290cfa7d2467e6b47 0 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f89e4e70b394c4d290cfa7d2467e6b47 0 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f89e4e70b394c4d290cfa7d2467e6b47 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.81U 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.81U 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.81U 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file 
key 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1c83209c197b282f170e71643957c5052458551247959738d9c78d31c6842f6f 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kay 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1c83209c197b282f170e71643957c5052458551247959738d9c78d31c6842f6f 3 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1c83209c197b282f170e71643957c5052458551247959738d9c78d31c6842f6f 3 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1c83209c197b282f170e71643957c5052458551247959738d9c78d31c6842f6f 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kay 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kay 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.kay 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=446d5929b979580569d71aedf2e3d9e8fb5dac054a7f8b66 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hpb 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 446d5929b979580569d71aedf2e3d9e8fb5dac054a7f8b66 0 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key 
DHHC-1 446d5929b979580569d71aedf2e3d9e8fb5dac054a7f8b66 0 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=446d5929b979580569d71aedf2e3d9e8fb5dac054a7f8b66 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hpb 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hpb 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.hpb 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bb723d4e6200462a300b0051476249b26e44ac548ff61def 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Fqp 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bb723d4e6200462a300b0051476249b26e44ac548ff61def 2 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bb723d4e6200462a300b0051476249b26e44ac548ff61def 2 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bb723d4e6200462a300b0051476249b26e44ac548ff61def 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:52.199 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Fqp 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Fqp 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Fqp 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1984fc1f89eca4a4703792110dd33d93 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Pjj 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1984fc1f89eca4a4703792110dd33d93 1 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1984fc1f89eca4a4703792110dd33d93 1 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1984fc1f89eca4a4703792110dd33d93 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Pjj 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Pjj 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Pjj 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=40efc7b77a052257538144b4f8c98a09 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ZRi 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 40efc7b77a052257538144b4f8c98a09 1 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 40efc7b77a052257538144b4f8c98a09 1 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=40efc7b77a052257538144b4f8c98a09 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ZRi 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ZRi 00:25:52.459 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ZRi 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6eaee4db28266c46036e2ac7eab641cafb215a0fe1a0447a 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0PH 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6eaee4db28266c46036e2ac7eab641cafb215a0fe1a0447a 2 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6eaee4db28266c46036e2ac7eab641cafb215a0fe1a0447a 2 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6eaee4db28266c46036e2ac7eab641cafb215a0fe1a0447a 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0PH 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0PH 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.0PH 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 
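
Each gen_dhchap_key call above draws len/2 random bytes as a hex string (xxd -p -c0 -l N /dev/urandom), wraps it into a DHHC-1 secret via an inline python step, and stores it mode 0600 in a mktemp file; the two digits after "DHHC-1:" encode the hash, matching the digests map printed in the trace (00 = null, 01 = sha256, 02 = sha384, 03 = sha512). The python body itself is not echoed by xtrace, so the sketch below is a reconstruction inferred from the generated keys, whose base64 payload decodes to the ASCII hex secret plus four trailing bytes, assumed here to be a little-endian CRC-32 of the secret:

# Hypothetical reconstruction of gen_dhchap_key for a 32-char null key.
# The xxd/mktemp/chmod lines are verbatim from the trace; the python
# step is an assumption inferred from the resulting key strings.
key=$(xxd -p -c0 -l 16 /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")   # byte order assumed
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
PY
chmod 0600 "$file"
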
00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4672b15c0aca901ea77243abdbc57e4d 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aE8 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4672b15c0aca901ea77243abdbc57e4d 0 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4672b15c0aca901ea77243abdbc57e4d 0 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4672b15c0aca901ea77243abdbc57e4d 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:52.460 07:31:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aE8 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aE8 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.aE8 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a2d74a9ad6d89fb37b073a255887151e26e89370f7de1ca0ccf780e183910fb2 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.mZN 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a2d74a9ad6d89fb37b073a255887151e26e89370f7de1ca0ccf780e183910fb2 3 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a2d74a9ad6d89fb37b073a255887151e26e89370f7de1ca0ccf780e183910fb2 3 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a2d74a9ad6d89fb37b073a255887151e26e89370f7de1ca0ccf780e183910fb2 00:25:52.720 07:31:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.mZN 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.mZN 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mZN 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2812203 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2812203 ']' 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:52.720 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.81U 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.kay ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kay 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.hpb 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- 
# [[ -n /tmp/spdk.key-sha384.Fqp ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fqp 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Pjj 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ZRi ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZRi 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.0PH 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.aE8 ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.aE8 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mZN 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:52.980 07:31:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:52.980 07:31:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:25:57.172 Waiting for block devices as requested 00:25:57.172 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:57.172 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:57.172 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:57.172 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:57.172 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:57.431 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:57.431 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:57.431 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:57.689 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:57.689 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:57.689 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:57.689 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:57.947 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:57.947 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:57.947 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:58.206 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:58.206 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:59.143 No valid GPT data, bailing 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:59.143 07:31:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:25:59.143 00:25:59.143 Discovery Log Number of Records 2, Generation counter 2 00:25:59.143 =====Discovery Log Entry 0====== 00:25:59.143 trtype: rdma 00:25:59.143 adrfam: ipv4 00:25:59.143 subtype: current discovery subsystem 00:25:59.143 treq: not specified, sq flow control disable supported 00:25:59.143 portid: 1 00:25:59.143 trsvcid: 4420 00:25:59.143 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:59.143 traddr: 192.168.100.8 00:25:59.143 eflags: none 00:25:59.143 rdma_prtype: not specified 00:25:59.143 rdma_qptype: connected 00:25:59.143 rdma_cms: rdma-cm 00:25:59.143 rdma_pkey: 0x0000 00:25:59.143 =====Discovery Log Entry 1====== 00:25:59.143 trtype: rdma 00:25:59.143 adrfam: ipv4 00:25:59.143 subtype: nvme subsystem 00:25:59.143 treq: not specified, sq flow control disable supported 00:25:59.143 portid: 1 00:25:59.143 trsvcid: 4420 00:25:59.143 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:59.143 traddr: 192.168.100.8 00:25:59.143 eflags: none 00:25:59.143 rdma_prtype: not specified 00:25:59.143 rdma_qptype: connected 00:25:59.143 rdma_cms: rdma-cm 00:25:59.143 rdma_pkey: 0x0000 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.143 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:59.402 07:31:31 
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:59.402 nvme0n1
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.402 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M:
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=:
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M:
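Editor's note: the for-loop trace lines at host/auth.sh@100-102 reveal the shape of the sweep that generates the rest of this section: every digest is crossed with every DH group and every key id, and each combination is provisioned on the target and then dialed from the host. Condensed from the trace (the loop bodies are abbreviated; array contents follow the values visible above):

  for digest in "${digests[@]}"; do            # sha256 sha384 sha512
      for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
          for keyid in "${!keys[@]}"; do       # keys 0-4, each a different DHHC-1 variant
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # provision the kernel target
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach via RPC
          done
      done
  done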
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]]
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=:
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.661 07:31:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:59.661 nvme0n1
00:25:59.661 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:59.661 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:59.661 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.661 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
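Editor's note: one connect_authenticate pass, reconstructed as direct SPDK rpc.py calls for readers reproducing this outside the harness. This is a hedged sketch: rpc_cmd in the trace is a thin wrapper around rpc.py, and the key names key0/ckey0 must already have been registered with SPDK earlier in the test, which this excerpt does not show.

  # Restrict the initiator to the digest/dhgroup pair under test.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Attach with DH-HMAC-CHAP; --dhchap-ctrlr-key enables bidirectional auth.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Authentication succeeded iff the controller actually came up under its name.
  [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The `[[ nvme0 == \n\v\m\e\0 ]]` comparisons in the trace are exactly this check after bash quote-escaping; the detach resets state so the next digest/dhgroup/key combination starts clean.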
00:25:59.661 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.661 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.920 nvme0n1 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.920 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:00.180 07:31:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.180 nvme0n1 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.180 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:00.439 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:00.440 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.440 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.440 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.440 nvme0n1 00:26:00.440 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.440 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.440 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.440 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.440 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.699 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.699 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.699 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.699 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.699 07:31:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.699 
07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.699 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 
-- # echo 192.168.100.8 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.700 nvme0n1 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.700 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.961 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.961 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.961 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.961 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.961 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.961 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.961 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.961 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.961 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:00.961 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.961 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.962 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.222 nvme0n1 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.222 07:31:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.222 07:31:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:01.222 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.223 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.223 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.482 nvme0n1 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- 
# echo 'hmac(sha256)' 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.482 07:31:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.741 nvme0n1 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.741 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.742 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.742 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.742 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.742 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:01.742 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:01.742 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:01.742 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:01.742 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:01.742 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.742 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.742 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.001 nvme0n1 00:26:02.001 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.001 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.001 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.001 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.001 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.001 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.001 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.001 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.001 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.001 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.260 07:31:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.260 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.520 nvme0n1 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.520 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.521 07:31:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.780 nvme0n1 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.780 07:31:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.780 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
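Before every attach, the harness re-derives the target address; the nvmf/common.sh@741-755 entries traced repeatedly in this stretch are the body of get_main_ns_ip, which maps the active transport to the name of an environment variable and then indirect-expands it (rdma -> NVMF_FIRST_TARGET_IP -> 192.168.100.8). A reconstruction of that helper from the trace follows; the TEST_TRANSPORT variable name is an assumption here, and the NVMF_* variables are assumed to be exported by the surrounding test environment.

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Unknown transport or no candidate variable: nothing to resolve.
    # (TEST_TRANSPORT is an assumed name; the trace only shows its value, "rdma".)
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1

    # Indirect expansion: ip holds the *name* of the variable to read,
    # e.g. NVMF_FIRST_TARGET_IP, whose value is 192.168.100.8 in this run.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -n ${!ip} ]] || return 1
    echo "${!ip}"
}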
00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.781 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.041 nvme0n1 00:26:03.041 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.041 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.041 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.041 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.041 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.300 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.301 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.301 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.301 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.301 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.301 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:03.301 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:03.301 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:03.301 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:03.301 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:03.301 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.301 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.301 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.561 nvme0n1 00:26:03.561 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.561 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.561 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.561 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.561 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.561 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.561 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.561 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.561 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.561 07:31:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.561 
07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.561 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.821 nvme0n1 00:26:03.821 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.821 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.821 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.821 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.821 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.821 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:04.080 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.081 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.386 nvme0n1 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.386 07:31:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.666 nvme0n1 00:26:04.666 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.666 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.666 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.666 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.666 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.925 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.926 07:31:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
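Each pass above follows the same RPC choreography: connect_authenticate first pins the host to a single digest/dhgroup pair with bdev_nvme_set_options, attaches the controller with the per-keyid DH-HMAC-CHAP secrets, confirms via bdev_nvme_get_controllers that nvme0 appeared (authentication failure would leave the list empty), and detaches before the next combination. A minimal sketch of one such pass, using the 192.168.100.8:4420 RDMA listener and NQNs from this run; the rpc.py path is a placeholder, and the key1/ckey1 names assume the secrets were registered earlier in the script.

rpc=./scripts/rpc.py   # placeholder path to SPDK's rpc.py
digest=sha256
dhgroup=ffdhe6144
keyid=1

# Pin the host to exactly one digest/dhgroup combination for this pass.
$rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the host key; --dhchap-ctrlr-key enables bidirectional
# authentication and is dropped for key IDs with no controller key.
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The controller only shows up if authentication succeeded.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0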
00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.926 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.186 nvme0n1 00:26:05.186 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.186 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.186 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.186 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.186 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.186 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.444 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.445 07:31:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.704 nvme0n1 00:26:05.704 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.704 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.704 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.704 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.704 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.704 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.962 07:31:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:05.962 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.963 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.222 nvme0n1 00:26:06.222 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.222 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.222 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.222 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.222 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.222 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.222 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.222 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.222 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.222 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.482 07:31:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.482 07:31:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.741 nvme0n1 00:26:06.741 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.741 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.741 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.741 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.741 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.741 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.741 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:06.742 07:31:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.742 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.679 nvme0n1 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.679 07:31:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:07.679 07:31:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.679 07:31:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.248 nvme0n1 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.248 07:31:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.817 nvme0n1 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.817 07:31:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.817 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.755 nvme0n1 00:26:09.755 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.755 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.755 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.755 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.755 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.755 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.755 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.755 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.755 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.755 07:31:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.755 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.324 nvme0n1 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.324 07:31:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.324 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.583 nvme0n1 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:10.583 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:10.584 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:10.584 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:10.584 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.584 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.584 07:31:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.842 nvme0n1 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:10.842 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.843 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.103 nvme0n1 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.103 07:31:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.103 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.364 nvme0n1 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
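Every pass above drives the same target-side helper: nvmet_auth_set_key selects the DHHC-1 key/ckey pair for the current keyid and publishes the digest, DH group, and secrets to the kernel nvmet target before the host attempts to connect. A minimal bash sketch of that helper, reconstructed from the xtrace lines (the trace shows only the echoed values; the configfs destinations, represented here by the hypothetical $nvmet_host directory, are an assumption):

  nvmet_auth_set_key() {   # sketch of the host/auth.sh helper seen in the trace
      local digest=$1 dhgroup=$2 keyid=$3 key ckey
      key=${keys[keyid]} ckey=${ckeys[keyid]}
      echo "hmac($digest)"    # destination assumed: $nvmet_host/dhchap_hash
      echo "$dhgroup"         # destination assumed: $nvmet_host/dhchap_dhgroup
      echo "$key"             # destination assumed: $nvmet_host/dhchap_key
      # keyid 4 carries no controller key, hence the [[ -z '' ]] branch above
      [[ -z $ckey ]] || echo "$ckey"   # destination assumed: $nvmet_host/dhchap_ctrl_key
  }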
00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:11.364 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.365 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.624 nvme0n1 00:26:11.624 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.624 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.624 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.624 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.624 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.624 07:31:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.624 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.624 
07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.624 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.624 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.624 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.624 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.624 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.624 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:11.624 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.624 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.624 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.624 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.625 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.884 nvme0n1 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.884 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:11.885 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:11.885 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:11.885 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:11.885 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:11.885 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.885 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.885 07:31:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.144 nvme0n1 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:12.144 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.403 07:31:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.403 nvme0n1 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.403 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 
3 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.663 07:31:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.923 nvme0n1 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 
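What connect_authenticate is doing at host/auth.sh@57-@61, condensed from the xtrace records above into plain shell (the helper names, RPC names, and flags are taken verbatim from the trace; the exact script text is an inference from the trace, not the authoritative source):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # pass a controller (bidirectional) key only when ckeys[keyid] is non-empty
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # pin the host to one digest/dhgroup pair so the DH-HMAC-CHAP handshake is deterministic
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # attach over RDMA; key0..key4 / ckey0..ckey4 are key names registered
        # earlier in the test (the registration step is not visible in this excerpt)
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
    }
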
00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.923 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.183 nvme0n1 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
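At this point the trace finishes ffdhe3072 (key ids 0 through 4, each attached, verified as nvme0, and detached) and moves on to ffdhe4096. The driving loop, as reconstructed from the host/auth.sh@101-@104 records (sha384 is the digest currently in flight; an enclosing digest loop is implied by the test but not visible in this excerpt):

    for dhgroup in "${dhgroups[@]}"; do      # this run: ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do       # 0..4, covering uni- and bidirectional key setups
            # program the kernel nvmet target with the same key/ckey material
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # then run the SPDK host through a full attach/verify/detach cycle
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
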
00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.183 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.443 nvme0n1 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.443 07:31:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.011 nvme0n1 00:26:14.011 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.011 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
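The verification step that follows each successful attach is small; per the host/auth.sh@64/@65 records it amounts to the following (a sketch using only commands present in the trace):

    # exactly one controller should exist, and it should be the one we attached
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
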
00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.012 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.271 nvme0n1 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.271 07:31:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.271 07:31:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.531 nvme0n1 00:26:14.531 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.531 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.531 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.531 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.531 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.790 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.050 nvme0n1 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.050 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.619 nvme0n1 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.619 07:31:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.619 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.189 nvme0n1 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.189 07:31:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.189 07:31:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.189 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.449 nvme0n1 00:26:16.449 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.449 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.449 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.449 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.449 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.449 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.449 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.449 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.449 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.449 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 
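
The trace above and below repeats a single test cycle once per digest/dhgroup/keyid combination: nvmet_auth_set_key (host/auth.sh@42-51) programs the DHHC-1 key, and optionally the bidirectional controller key, into the kernel nvmet target, then connect_authenticate (host/auth.sh@55-65) restricts the SPDK initiator to that one digest and DH group via bdev_nvme_set_options, attaches a controller with the matching key names, and checks via bdev_nvme_get_controllers that authentication actually succeeded before detaching. A minimal standalone sketch of one such cycle, assuming the kernel nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), SPDK's scripts/rpc.py on $PATH (the test's rpc_cmd wrapper invokes it), and key names key1/ckey1 registered in SPDK's keyring earlier in the run (not shown in this excerpt):

#!/usr/bin/env bash
# One DH-HMAC-CHAP auth iteration, modeled on host/auth.sh as it appears in
# this log. NQNs, address, and key values are illustrative placeholders.
set -e

HOSTNQN=nqn.2024-02.io.spdk:host0
SUBNQN=nqn.2024-02.io.spdk:cnode0
ADDR=192.168.100.8                  # NVMF_FIRST_TARGET_IP in the log
KEY='DHHC-1:00:...'                 # placeholder; full DHHC-1 secret
CKEY='DHHC-1:02:...'                # placeholder; empty if unidirectional

# Target side: write digest, DH group, and key(s) into the nvmet host entry
# (the echo 'hmac(sha384)' / echo ffdhe6144 / echo DHHC-1:... lines above).
HOSTDIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN
mkdir -p "$HOSTDIR"
echo 'hmac(sha384)' > "$HOSTDIR/dhchap_hash"
echo 'ffdhe6144'    > "$HOSTDIR/dhchap_dhgroup"
echo "$KEY"         > "$HOSTDIR/dhchap_key"
if [[ -n $CKEY ]]; then
    echo "$CKEY" > "$HOSTDIR/dhchap_ctrl_key"
fi

# Host side: allow exactly one digest and one DH group, then attach with the
# matching keyring entries; with set -e a failed authentication aborts here.
rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ADDR" -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller exists, then tear down for the next iteration.
[[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc.py bdev_nvme_detach_controller nvme0

Pinning bdev_nvme_set_options to a single digest and a single DH group per iteration is what makes each [[ nvme0 == \n\v\m\e\0 ]] check in the trace meaningful: a successful attach can only come from the exact combination under test, never from a fallback negotiation.
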
00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.708 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.709 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.709 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.709 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:16.709 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:16.709 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:16.709 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:16.709 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:16.709 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:16.709 07:31:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.709 07:31:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.968 nvme0n1 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:16.968 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.969 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.969 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.969 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.969 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.969 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.228 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.228 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.228 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.228 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:17.228 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:17.228 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:17.228 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:17.228 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:17.228 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.228 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.228 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.488 nvme0n1 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
192.168.100.8 ]] 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.488 07:31:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.427 nvme0n1 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.427 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.428 07:31:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.996 nvme0n1 00:26:18.996 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.996 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.996 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.996 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.996 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.997 07:31:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.997 07:31:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.997 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.566 nvme0n1 00:26:19.566 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.566 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.566 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.566 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.566 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.566 07:31:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe8192 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.566 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.171 nvme0n1 00:26:20.171 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.171 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.172 
07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.172 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.172 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.172 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.172 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.172 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.172 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.172 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.431 07:31:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.006 nvme0n1 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.006 07:31:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.006 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.267 nvme0n1 00:26:21.267 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.267 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.268 07:31:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.268 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.527 nvme0n1 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.527 
07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.527 
07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.527 07:31:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.788 nvme0n1 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:21.788 07:31:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.788 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.047 nvme0n1 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.047 07:31:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.047 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 
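What the trace above is doing on the target side: each nvmet_auth_set_key call stages one DH-HMAC-CHAP secret per key index before the next connect attempt. The secrets use the TP 8006 "DHHC-1:tt:<base64>:" representation, where the two-digit field encodes the transformation applied to the secret (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret bytes followed by a CRC-32. A minimal sketch of the helper, assuming the stock Linux nvmet configfs layout (the xtrace prints only the echo arguments, never the redirection targets, so the paths and the keys/ckeys arrays below are assumptions):

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed configfs host entry; matches the hostnqn used in the trace.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. 'hmac(sha512)'
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe3072
        echo "$key"          > "$host/dhchap_key"      # host secret
        # A controller (bidirectional) secret is optional; keyid 4 has none,
        # which is why the trace shows [[ -z '' ]] for it.
        [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
    }
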
00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.048 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.307 nvme0n1 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.307 07:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.307 07:31:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.566 nvme0n1 00:26:22.566 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.566 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.566 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.566 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.566 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.566 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.567 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.826 nvme0n1 00:26:22.826 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.826 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.826 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.826 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.826 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.826 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.085 
07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 
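The get_main_ns_ip helper that keeps reappearing in the trace just maps the transport to the name of an environment variable and then dereferences it. A condensed sketch (the variable actually holding "rdma" is already expanded by the time xtrace prints it, so TEST_TRANSPORT is an assumed name):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # resolves to 192.168.100.8 in this run
            [tcp]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"   # indirect expansion yields the IP itself
    }
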
00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.085 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.345 nvme0n1 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:23.345 07:31:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.345 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.604 nvme0n1 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.604 
07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.604 07:31:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:23.604 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.605 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.864 nvme0n1 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.864 07:31:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.864 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.122 nvme0n1 00:26:24.122 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.122 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
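On the initiator side, every connect_authenticate pass is the same four-RPC cycle, shown here with the ffdhe4096/key0 parameters in flight above (in this suite rpc_cmd wraps SPDK's scripts/rpc.py, and key0/ckey0 are the names of keys registered earlier in the test):

    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # The controller only exists if DH-HMAC-CHAP succeeded, so listing it
    # is the pass/fail check before tearing the connection down:
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
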
00:26:24.123 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.123 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.123 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.123 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.123 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.123 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.123 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.123 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.382 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.641 nvme0n1 00:26:24.641 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.641 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.641 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.641 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.641 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.641 07:31:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.641 
07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.641 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.900 nvme0n1 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.900 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.901 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.469 nvme0n1 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.469 07:31:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.469 07:31:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.728 nvme0n1 00:26:25.728 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.728 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.728 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.728 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.728 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.728 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:25.729 07:31:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.729 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.297 nvme0n1 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.297 07:31:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
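
The records above are one pass of the sha512 sweep in host/auth.sh: for each digest/DH-group/key-id combination the target-side DH-HMAC-CHAP key is installed, the SPDK host is restricted to that single digest and DH group, and an authenticated RDMA attach is attempted. rpc_cmd in the trace is the autotest wrapper around SPDK's scripts/rpc.py, so a rough stand-alone equivalent of the host-side half of this ffdhe6144/keyid=1 iteration is the sketch below. The rpc.py path and the pre-registered key names key1/ckey1 are assumptions (the keys are loaded earlier in the run, outside this excerpt); every flag is taken verbatim from the logged commands.

  # Host side of one connect_authenticate iteration, mirrored from the trace.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
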
00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.297 07:31:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.556 nvme0n1 00:26:26.556 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.556 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.556 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.556 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.556 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.556 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.556 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.556 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.556 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.556 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 
00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.815 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:27.074 nvme0n1 00:26:27.074 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.074 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.074 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.074 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.074 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.074 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.074 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.074 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.074 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.074 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.333 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.333 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.333 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:27.333 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.333 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.333 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.333 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.334 07:31:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.334 07:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.593 nvme0n1 00:26:27.593 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.593 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.593 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.593 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.593 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.593 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.593 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.593 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.593 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.593 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.593 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 
4 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.852 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.111 nvme0n1 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.111 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Zjg5ZTRlNzBiMzk0YzRkMjkwY2ZhN2QyNDY3ZTZiNDd3sa0M: 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: ]] 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWM4MzIwOWMxOTdiMjgyZjE3MGU3MTY0Mzk1N2M1MDUyNDU4NTUxMjQ3OTU5NzM4ZDljNzhkMzFjNjg0MmY2ZsE3L8M=: 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:28.112 07:32:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.112 07:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.048 nvme0n1 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
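
Every authenticated attach in the sweep is verified and torn down the same way before the next digest/DH-group/key combination runs: list the controllers, expect the controller name nvme0 back (it only appears if authentication and the attach succeeded), then detach. The bare nvme0n1 records in between appear to be the attach RPC printing the namespace bdev it created. A minimal stand-alone equivalent of the check, with the jq filter taken verbatim from the trace and the rpc.py path assumed:

  # Expect exactly the controller created by the authenticated attach.
  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]   # non-zero exit fails the run under set -e
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
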
00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.048 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.049 
07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.049 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.617 nvme0n1 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 
-- # echo ffdhe8192 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk4NGZjMWY4OWVjYTRhNDcwMzc5MjExMGRkMzNkOTMCyRsX: 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: ]] 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlZmM3Yjc3YTA1MjI1NzUzODE0NGI0ZjhjOThhMDnO/XZ/: 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.617 07:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.618 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.186 nvme0n1 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.186 07:32:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVhZWU0ZGIyODI2NmM0NjAzNmUyYWM3ZWFiNjQxY2FmYjIxNWEwZmUxYTA0NDdhRBlRPg==: 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: ]] 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDY3MmIxNWMwYWNhOTAxZWE3NzI0M2FiZGJjNTdlNGTrbj94: 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.186 07:32:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.123 nvme0n1 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe8192 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJkNzRhOWFkNmQ4OWZiMzdiMDczYTI1NTg4NzE1MWUyNmU4OTM3MGY3ZGUxY2EwY2NmNzgwZTE4MzkxMGZiMhciGXk=: 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.123 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.123 07:32:03 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.691 nvme0n1 00:26:31.691 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.691 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.691 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.691 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.691 07:32:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZDU5MjliOTc5NTgwNTY5ZDcxYWVkZjJlM2Q5ZThmYjVkYWMwNTRhN2Y4YjY2rOaQmw==: 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: ]] 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmI3MjNkNGU2MjAwNDYyYTMwMGIwMDUxNDc2MjQ5YjI2ZTQ0YWM1NDhmZjYxZGVmgVZvCw==: 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.691 07:32:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.691 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.691 request: 00:26:31.691 { 00:26:31.691 "name": "nvme0", 00:26:31.691 "trtype": "rdma", 00:26:31.691 "traddr": "192.168.100.8", 00:26:31.691 "adrfam": "ipv4", 00:26:31.691 "trsvcid": "4420", 00:26:31.691 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:31.691 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:31.691 "prchk_reftag": false, 00:26:31.691 "prchk_guard": false, 00:26:31.691 "hdgst": false, 00:26:31.691 "ddgst": false, 00:26:31.691 "method": "bdev_nvme_attach_controller", 00:26:31.692 "req_id": 1 00:26:31.692 } 00:26:31.692 Got JSON-RPC error response 00:26:31.692 response: 00:26:31.692 { 00:26:31.692 "code": -5, 00:26:31.692 "message": "Input/output error" 00:26:31.692 } 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:31.692 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:31.951 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.952 request: 
00:26:31.952 { 00:26:31.952 "name": "nvme0", 00:26:31.952 "trtype": "rdma", 00:26:31.952 "traddr": "192.168.100.8", 00:26:31.952 "adrfam": "ipv4", 00:26:31.952 "trsvcid": "4420", 00:26:31.952 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:31.952 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:31.952 "prchk_reftag": false, 00:26:31.952 "prchk_guard": false, 00:26:31.952 "hdgst": false, 00:26:31.952 "ddgst": false, 00:26:31.952 "dhchap_key": "key2", 00:26:31.952 "method": "bdev_nvme_attach_controller", 00:26:31.952 "req_id": 1 00:26:31.952 } 00:26:31.952 Got JSON-RPC error response 00:26:31.952 response: 00:26:31.952 { 00:26:31.952 "code": -5, 00:26:31.952 "message": "Input/output error" 00:26:31.952 } 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.952 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.212 request: 00:26:32.212 { 00:26:32.212 "name": "nvme0", 00:26:32.212 "trtype": "rdma", 00:26:32.212 "traddr": "192.168.100.8", 00:26:32.212 "adrfam": "ipv4", 00:26:32.212 "trsvcid": "4420", 00:26:32.212 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:32.212 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:32.212 "prchk_reftag": false, 00:26:32.212 "prchk_guard": false, 00:26:32.212 "hdgst": false, 00:26:32.212 "ddgst": false, 00:26:32.212 "dhchap_key": "key1", 00:26:32.212 "dhchap_ctrlr_key": "ckey2", 00:26:32.212 "method": "bdev_nvme_attach_controller", 00:26:32.212 "req_id": 1 00:26:32.212 } 00:26:32.212 Got JSON-RPC error response 00:26:32.212 response: 00:26:32.212 { 00:26:32.212 "code": -5, 00:26:32.212 "message": "Input/output error" 00:26:32.212 } 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:32.212 rmmod nvme_rdma 00:26:32.212 rmmod nvme_fabrics 00:26:32.212 07:32:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2812203 ']' 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2812203 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2812203 ']' 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2812203 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2812203 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2812203' 00:26:32.212 killing process with pid 2812203 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2812203 00:26:32.212 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2812203 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:26:32.472 07:32:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:36.668 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:36.668 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:38.600 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:26:38.600 07:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.81U /tmp/spdk.key-null.hpb /tmp/spdk.key-sha256.Pjj /tmp/spdk.key-sha384.0PH /tmp/spdk.key-sha512.mZN /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:26:38.600 07:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:42.797 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:42.797 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:42.797 00:26:42.797 real 0m59.378s 00:26:42.797 user 0m50.674s 00:26:42.797 sys 0m17.477s 00:26:42.797 07:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:42.797 07:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.797 ************************************ 00:26:42.797 END TEST nvmf_auth_host 00:26:42.797 ************************************ 00:26:42.797 07:32:14 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:26:42.797 07:32:14 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:42.797 07:32:14 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:42.797 07:32:14 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:42.797 07:32:14 
nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:26:42.797 07:32:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:42.797 07:32:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:42.797 07:32:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.797 ************************************ 00:26:42.797 START TEST nvmf_bdevperf 00:26:42.797 ************************************ 00:26:42.797 07:32:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:26:42.797 * Looking for test storage... 00:26:42.797 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
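With nvmf/common.sh loaded and the target app arguments assembled, the trace next walks the PCI bus for RDMA-capable NICs (gather_supported_nvmf_pci_devs through the "Found net devices under 0000:d9:00.x" lines). Condensed into a standalone sketch, with the Mellanox vendor ID 0x15b3 taken from the trace and the loop structure illustrative rather than the harness's own code:

    # Walk PCI devices, keep Mellanox parts, list the net interfaces behind them.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(< "$pci/vendor")
        [[ $vendor == 0x15b3 ]] || continue        # 0x15b3 = Mellanox
        for net in "$pci"/net/*; do                # e.g. .../net/mlx_0_0
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done

On this machine that yields mlx_0_0 and mlx_0_1 (device ID 0x1015, a ConnectX-4 Lx part), which the harness then brings up with 192.168.100.8/24 and 192.168.100.9/24.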
00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:42.797 07:32:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:50.926 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:50.926 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:50.927 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # 
[[ rdma == rdma ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:50.927 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:50.927 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # 
modprobe rdma_cm 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:50.927 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:50.927 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:50.927 altname enp217s0f0np0 00:26:50.927 altname ens818f0np0 00:26:50.927 inet 192.168.100.8/24 scope global mlx_0_0 00:26:50.927 
valid_lft forever preferred_lft forever 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:50.927 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:50.927 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:50.927 altname enp217s0f1np1 00:26:50.927 altname ens818f1np1 00:26:50.927 inet 192.168.100.9/24 scope global mlx_0_1 00:26:50.927 valid_lft forever preferred_lft forever 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:50.927 07:32:23 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:50.927 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:50.928 192.168.100.9' 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:50.928 192.168.100.9' 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:50.928 192.168.100.9' 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.928 07:32:23 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2827964 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2827964 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2827964 ']' 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:50.928 07:32:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.928 [2024-07-25 07:32:23.443360] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:26:50.928 [2024-07-25 07:32:23.443413] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.186 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.186 [2024-07-25 07:32:23.529710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:51.186 [2024-07-25 07:32:23.603145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.186 [2024-07-25 07:32:23.603179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.186 [2024-07-25 07:32:23.603189] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.186 [2024-07-25 07:32:23.603198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.186 [2024-07-25 07:32:23.603205] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
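nvmfappstart above reduces to launching the target with this run's core mask and polling until its RPC socket answers. A rough standalone equivalent, assuming the workspace layout shown in the trace; the harness's waitforlisten does more bookkeeping, and the rpc_get_methods probe is an illustrative choice (any cheap RPC works):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # -m 0xE is cores 1-3, which is why three reactors come up on cores 1, 2 and 3.
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # retry until the target listens on /var/tmp/spdk.sock
    done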
00:26:51.186 [2024-07-25 07:32:23.603247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.186 [2024-07-25 07:32:23.603337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.186 [2024-07-25 07:32:23.603338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.754 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:51.754 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:51.754 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:51.754 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:51.754 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.013 [2024-07-25 07:32:24.325580] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd9c500/0xda09f0) succeed. 00:26:52.013 [2024-07-25 07:32:24.335218] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd9daa0/0xde2080) succeed. 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.013 Malloc0 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
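Each rpc_cmd in that stretch is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the whole target provisioning can be replayed by hand. The calls below mirror the traced ones one-for-one (a sketch of the equivalent direct invocations, not a capture from the log):

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420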
00:26:52.013 [2024-07-25 07:32:24.469755] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.013 { 00:26:52.013 "params": { 00:26:52.013 "name": "Nvme$subsystem", 00:26:52.013 "trtype": "$TEST_TRANSPORT", 00:26:52.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.013 "adrfam": "ipv4", 00:26:52.013 "trsvcid": "$NVMF_PORT", 00:26:52.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.013 "hdgst": ${hdgst:-false}, 00:26:52.013 "ddgst": ${ddgst:-false} 00:26:52.013 }, 00:26:52.013 "method": "bdev_nvme_attach_controller" 00:26:52.013 } 00:26:52.013 EOF 00:26:52.013 )") 00:26:52.013 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:52.014 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:52.014 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:52.014 07:32:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:52.014 "params": { 00:26:52.014 "name": "Nvme1", 00:26:52.014 "trtype": "rdma", 00:26:52.014 "traddr": "192.168.100.8", 00:26:52.014 "adrfam": "ipv4", 00:26:52.014 "trsvcid": "4420", 00:26:52.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:52.014 "hdgst": false, 00:26:52.014 "ddgst": false 00:26:52.014 }, 00:26:52.014 "method": "bdev_nvme_attach_controller" 00:26:52.014 }' 00:26:52.014 [2024-07-25 07:32:24.507835] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:26:52.014 [2024-07-25 07:32:24.507882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828246 ] 00:26:52.272 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.272 [2024-07-25 07:32:24.587692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.272 [2024-07-25 07:32:24.657941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.531 Running I/O for 1 seconds... 
00:26:53.467
00:26:53.467 Latency(us)
00:26:53.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:53.467 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:53.467 Verification LBA range: start 0x0 length 0x4000
00:26:53.467 Nvme1n1 : 1.00 18259.11 71.32 0.00 0.00 6966.73 1028.92 12111.05
00:26:53.467 ===================================================================================================================
00:26:53.467 Total : 18259.11 71.32 0.00 0.00 6966.73 1028.92 12111.05
00:26:53.726 07:32:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2828515
00:26:53.726 07:32:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:26:53.726 07:32:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:26:53.726 07:32:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:26:53.726 07:32:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:26:53.726 07:32:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:26:53.726 07:32:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:53.726 07:32:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:53.726 {
00:26:53.726 "params": {
00:26:53.726 "name": "Nvme$subsystem",
00:26:53.726 "trtype": "$TEST_TRANSPORT",
00:26:53.726 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:53.726 "adrfam": "ipv4",
00:26:53.726 "trsvcid": "$NVMF_PORT",
00:26:53.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:53.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:53.726 "hdgst": ${hdgst:-false},
00:26:53.726 "ddgst": ${ddgst:-false}
00:26:53.726 },
00:26:53.726 "method": "bdev_nvme_attach_controller"
00:26:53.726 }
00:26:53.726 EOF
00:26:53.726 )")
00:26:53.726 07:32:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:26:53.726 07:32:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:26:53.726 07:32:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:26:53.726 07:32:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:26:53.726 "params": {
00:26:53.726 "name": "Nvme1",
00:26:53.726 "trtype": "rdma",
00:26:53.726 "traddr": "192.168.100.8",
00:26:53.726 "adrfam": "ipv4",
00:26:53.726 "trsvcid": "4420",
00:26:53.726 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:53.726 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:53.726 "hdgst": false,
00:26:53.726 "ddgst": false
00:26:53.726 },
00:26:53.726 "method": "bdev_nvme_attach_controller"
00:26:53.726 }'
00:26:53.726 [2024-07-25 07:32:26.092512] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
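The configuration streamed to bdevperf over /dev/fd/63 is exactly the bdev_nvme_attach_controller stanza printed by gen_nvmf_target_json above. A self-contained sketch of the same idea, with the values from this run hard-coded; the gen_target_json name and the surrounding subsystems/bdev wrapper are assumptions here, not lifted from nvmf/common.sh:

# Emit a bdev subsystem config that attaches controller Nvme1 to the RDMA target.
gen_target_json() {
cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}
# -q 128: queue depth; -o 4096: 4 KiB I/Os; -w verify: write/read-back verification workload;
# -t 15: run for 15 seconds; -f: keep running across I/O failures so the reset path gets exercised.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json <(gen_target_json) -q 128 -o 4096 -w verify -t 15 -f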
00:26:53.726 [2024-07-25 07:32:26.092565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828515 ] 00:26:53.726 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.726 [2024-07-25 07:32:26.177119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.726 [2024-07-25 07:32:26.242705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.985 Running I/O for 15 seconds... 00:26:57.274 07:32:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2827964 00:26:57.274 07:32:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:57.845 [2024-07-25 07:32:30.070575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.845 [2024-07-25 07:32:30.070618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e3516000 sqhd:52b0 p:0 m:0 dnr:0 00:26:57.845 [2024-07-25 07:32:30.070643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.845 [2024-07-25 07:32:30.070654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e3516000 sqhd:52b0 p:0 m:0 dnr:0 00:26:57.845 [2024-07-25 07:32:30.070665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.845 [2024-07-25 07:32:30.070675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e3516000 sqhd:52b0 p:0 m:0 dnr:0 00:26:57.845 [2024-07-25 07:32:30.070686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.845 [2024-07-25 07:32:30.070695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e3516000 sqhd:52b0 p:0 m:0 dnr:0 00:26:57.845 [2024-07-25 07:32:30.070706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.845 [2024-07-25 07:32:30.070716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e3516000 sqhd:52b0 p:0 m:0 dnr:0 00:26:57.845 [2024-07-25 07:32:30.070727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.845 [2024-07-25 07:32:30.070741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e3516000 sqhd:52b0 p:0 m:0 dnr:0 00:26:57.845 [2024-07-25 07:32:30.070751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.845 [2024-07-25 07:32:30.070760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e3516000 sqhd:52b0 p:0 m:0 dnr:0 00:26:57.845 [2024-07-25 07:32:30.070771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.845 [2024-07-25 07:32:30.070780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e3516000 sqhd:52b0 p:0 m:0 dnr:0
00:26:57.846 [2024-07-25 07:32:30.070791 - 07:32:30.073120] nvme_qpair.c: *NOTICE*: (condensed: the nvme_io_qpair_print_command / spdk_nvme_print_completion pair above repeats for every remaining outstanding I/O, cid varying) WRITE sqid:1 lba:123400..123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, then READ sqid:1 lba:122880..123320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000..0x20000756e000 len:0x1000 key:0x181d00; each completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e3516000 sqhd:52b0 p:0 m:0 dnr:0
00:26:57.849 [2024-07-25 07:32:30.075041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:57.849 [2024-07-25 07:32:30.075056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:57.849 [2024-07-25 07:32:30.075065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123328 len:8 PRP1 0x0 PRP2 0x0
00:26:57.849 [2024-07-25 07:32:30.075075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.849 [2024-07-25 07:32:30.075118] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
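The abort storm above is the point of the test: bdevperf.sh line 33 hard-kills the target (pid 2827964) while 128 I/Os are outstanding, the RDMA qpair is torn down, and bdev_nvme manually completes every queued command with ABORTED - SQ DELETION before freeing the qpair and scheduling a controller reset. The fault-injection step echoed in the log is simply:

# host/bdevperf.sh lines 33-35: kill the live target mid-run, then give the host side
# time to abort its queued I/O and enter the reset/reconnect loop.
kill -9 "$nvmfpid"
sleep 3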
00:26:57.849 [2024-07-25 07:32:30.077946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:57.849 [2024-07-25 07:32:30.092607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:57.849 [2024-07-25 07:32:30.095280] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:57.849 [2024-07-25 07:32:30.095300] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:57.849 [2024-07-25 07:32:30.095308] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:58.787 [2024-07-25 07:32:31.099317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:58.787 [2024-07-25 07:32:31.099339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.787 [2024-07-25 07:32:31.099507] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.787 [2024-07-25 07:32:31.099517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.787 [2024-07-25 07:32:31.099527] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:58.787 [2024-07-25 07:32:31.102122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.787 [2024-07-25 07:32:31.106254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.787 [2024-07-25 07:32:31.108877] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:58.787 [2024-07-25 07:32:31.108897] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:58.787 [2024-07-25 07:32:31.108905] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:59.724 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2827964 Killed "${NVMF_APP[@]}" "$@" 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2829554 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2829554 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2829554 ']' 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.724 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.724 [2024-07-25 07:32:32.110305] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:26:59.724 [2024-07-25 07:32:32.110349] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.724 [2024-07-25 07:32:32.112788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:59.724 [2024-07-25 07:32:32.112809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.724 [2024-07-25 07:32:32.112982] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.724 [2024-07-25 07:32:32.112993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.724 [2024-07-25 07:32:32.113003] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:59.724 [2024-07-25 07:32:32.115683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.724 [2024-07-25 07:32:32.120874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.724 [2024-07-25 07:32:32.123290] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:59.724 [2024-07-25 07:32:32.123310] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:59.724 [2024-07-25 07:32:32.123319] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:59.724 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.724 [2024-07-25 07:32:32.195858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:59.984 [2024-07-25 07:32:32.269100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.984 [2024-07-25 07:32:32.269136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.984 [2024-07-25 07:32:32.269146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.984 [2024-07-25 07:32:32.269154] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.984 [2024-07-25 07:32:32.269161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
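Per the app_setup_trace notices above, the target was started with -e 0xFFFF, so all tracepoint groups are armed and a snapshot can be taken while it runs, exactly as the log suggests. A minimal sketch, assuming the same build tree used throughout this job:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
    cp /dev/shm/nvmf_trace.0 /tmp/    # or keep the shm file for offline analysis, as the notice says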
00:26:59.984 [2024-07-25 07:32:32.269203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.984 [2024-07-25 07:32:32.269289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.984 [2024-07-25 07:32:32.269291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.552 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:00.552 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:00.552 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:00.552 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:00.552 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.552 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.552 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:00.552 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.552 07:32:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.552 [2024-07-25 07:32:32.997467] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a5a500/0x1a5e9f0) succeed. 00:27:00.552 [2024-07-25 07:32:33.006659] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a5baa0/0x1aa0080) succeed. 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.812 [2024-07-25 07:32:33.127318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:00.812 [2024-07-25 07:32:33.127356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.812 [2024-07-25 07:32:33.127530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.812 [2024-07-25 07:32:33.127542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.812 [2024-07-25 07:32:33.127552] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:00.812 [2024-07-25 07:32:33.130228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
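The create_ib_device notices above show the transport layer binding both mlx5 ports (mlx5_0 and mlx5_1). Outside the harness, the same view is available from the stock RDMA userspace tools; a sketch (device and netdev names will vary by host):

    ibv_devices        # lists mlx5_0 / mlx5_1 once mlx5_ib is loaded
    rdma link show     # maps each RDMA device to its netdev (mlx_0_0 / mlx_0_1 in this log)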
00:27:00.812 Malloc0 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.812 [2024-07-25 07:32:33.138735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.812 [2024-07-25 07:32:33.141202] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:00.812 [2024-07-25 07:32:33.141223] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:00.812 [2024-07-25 07:32:33.141232] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.812 [2024-07-25 07:32:33.154823] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.812 07:32:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2828515 00:27:01.749 [2024-07-25 07:32:34.145053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:01.749 [2024-07-25 07:32:34.145075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.749 [2024-07-25 07:32:34.145248] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.749 [2024-07-25 07:32:34.145259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.749 [2024-07-25 07:32:34.145269] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:01.749 [2024-07-25 07:32:34.145284] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:01.749 [2024-07-25 07:32:34.147950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
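Stripped of the xtrace noise, the target-side setup traced above is five RPCs: create the RDMA transport, back it with a 64 MiB malloc bdev, create the subsystem, attach the namespace, and open the listener. Issued by hand through SPDK's standard rpc.py client (against the same /var/tmp/spdk.sock), the sequence would look roughly like:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420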
00:27:01.749 [2024-07-25 07:32:34.158170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.749 [2024-07-25 07:32:34.201441] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:09.895
00:27:09.895 Latency(us)
00:27:09.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:09.895 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:09.895 Verification LBA range: start 0x0 length 0x4000
00:27:09.895 Nvme1n1 : 15.01 12068.56 47.14 14066.08 0.00 4878.22 326.04 1026765.62
00:27:09.895 ===================================================================================================================
00:27:09.895 Total : 12068.56 47.14 14066.08 0.00 4878.22 326.04 1026765.62
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:27:09.895 rmmod nvme_rdma
00:27:09.895 rmmod nvme_fabrics
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2829554 ']'
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2829554
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2829554 ']'
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2829554
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2829554
00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:09.895 07:32:41
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2829554' 00:27:09.895 killing process with pid 2829554 00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2829554 00:27:09.895 07:32:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2829554 00:27:09.895 07:32:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:09.895 07:32:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:09.895 00:27:09.895 real 0m27.094s 00:27:09.895 user 1m4.801s 00:27:09.895 sys 0m7.626s 00:27:09.895 07:32:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:09.895 07:32:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.895 ************************************ 00:27:09.895 END TEST nvmf_bdevperf 00:27:09.895 ************************************ 00:27:09.895 07:32:42 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:27:09.895 07:32:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:09.895 07:32:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:09.895 07:32:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.895 ************************************ 00:27:09.895 START TEST nvmf_target_disconnect 00:27:09.895 ************************************ 00:27:09.895 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:27:09.895 * Looking for test storage... 
00:27:09.895 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:09.895 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.895 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:09.896 07:32:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
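gather_supported_nvmf_pci_devs, traced next, builds its candidate list by vendor:device ID (0x8086 Intel e810/x722 parts, 0x15b3 Mellanox mlx5 parts). The same inventory can be checked by hand with lspci filters, as sketched here; the 0x15b3:0x1015 functions it finds below are the two ConnectX-4 Lx ports at 0000:d9:00.0/.1:

    lspci -nd 15b3:         # all Mellanox functions, numeric IDs
    lspci -nd 15b3:1015     # just the mlx5 parts this job will use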
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.019 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:18.020 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:18.020 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:18.020 07:32:50 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:18.020 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:18.020 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:18.020 07:32:50 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:18.020 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:18.021 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:18.021 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:18.021 altname enp217s0f0np0 00:27:18.021 altname ens818f0np0 00:27:18.021 inet 192.168.100.8/24 scope global mlx_0_0 00:27:18.021 valid_lft forever preferred_lft forever 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:18.021 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:18.021 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:18.021 altname enp217s0f1np1 00:27:18.021 altname ens818f1np1 00:27:18.021 inet 192.168.100.9/24 scope global mlx_0_1 00:27:18.021 valid_lft forever preferred_lft forever 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:18.021 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
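The get_ip_address helper being traced here is a thin wrapper around ip(8): print the one-line IPv4 record for the interface, take the fourth field, and strip the prefix length. As a standalone sketch of the same pipeline:

    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8
    get_ip_address mlx_0_1    # -> 192.168.100.9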
00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:18.280 192.168.100.9' 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:18.280 192.168.100.9' 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:18.280 192.168.100.9' 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 
1 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:18.280 ************************************ 00:27:18.280 START TEST nvmf_target_disconnect_tc1 00:27:18.280 ************************************ 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:18.280 07:32:50 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:27:18.280 07:32:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:18.280 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.539 [2024-07-25 07:32:50.830038] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:18.539 [2024-07-25 07:32:50.830083] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:18.539 [2024-07-25 07:32:50.830092] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:27:19.476 [2024-07-25 07:32:51.833821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:19.476 [2024-07-25 07:32:51.833854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:27:19.476 [2024-07-25 07:32:51.833865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:27:19.476 [2024-07-25 07:32:51.833909] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:19.476 [2024-07-25 07:32:51.833919] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:19.476 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:27:19.476 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:19.476 Initializing NVMe Controllers 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:19.476 00:27:19.476 real 0m1.152s 00:27:19.476 user 0m0.857s 00:27:19.476 sys 0m0.285s 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:19.476 ************************************ 00:27:19.476 END TEST nvmf_target_disconnect_tc1 00:27:19.476 ************************************ 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:19.476 07:32:51 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:19.476 ************************************ 00:27:19.476 START TEST nvmf_target_disconnect_tc2 00:27:19.476 ************************************ 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2835388 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2835388 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2835388 ']' 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:19.476 07:32:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.476 [2024-07-25 07:32:51.981635] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:27:19.476 [2024-07-25 07:32:51.981699] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.735 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.735 [2024-07-25 07:32:52.076991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:19.735 [2024-07-25 07:32:52.147824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:19.735 [2024-07-25 07:32:52.147867] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.735 [2024-07-25 07:32:52.147876] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.735 [2024-07-25 07:32:52.147885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.735 [2024-07-25 07:32:52.147891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:19.735 [2024-07-25 07:32:52.148021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:27:19.735 [2024-07-25 07:32:52.148171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:27:19.735 [2024-07-25 07:32:52.148278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:27:19.735 [2024-07-25 07:32:52.148279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:27:20.302 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:20.302 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:20.302 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:20.302 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:20.302 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.561 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.561 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:20.561 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.561 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.561 Malloc0 00:27:20.561 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.561 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:20.561 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.561 07:32:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.561 [2024-07-25 07:32:52.878106] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x215ce40/0x2168bc0) succeed. 00:27:20.561 [2024-07-25 07:32:52.887848] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x215e480/0x21aa250) succeed. 
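Note the core-mask split that keeps test case 2's two processes apart: this second nvmf_tgt was started with -m 0xF0, so its reactors land on cores 4-7 (visible in the notices above), while the reconnect initiator launched next runs with -c 0xF on cores 0-3. Side by side, the two invocations from this log (paths shortened relative to the spdk checkout):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'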
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:20.561 [2024-07-25 07:32:53.026028] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2835670
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:27:20.561 07:32:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:27:20.820 EAL: No free 2048 kB hugepages reported on node 1
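The bring-up the harness just performed can be reproduced by hand. A minimal sketch, assuming a built SPDK tree as the working directory and scripts/rpc.py talking to the default /var/tmp/spdk.sock (the log's rpc_cmd is the test framework's wrapper around that script); every binary and flag below is taken from the trace above:

  # Start the target on cores 4-7 (-m 0xF0) with all tracepoint groups enabled (-e 0xFFFF).
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  # Configure it over the RPC socket: backing bdev, RDMA transport, subsystem, namespace, listeners.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  # Drive I/O from the host side exactly as the test does (32-deep queues, 4 KiB random R/W, 10 s).
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'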
00:27:22.721 07:32:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2835388
00:27:22.721 07:32:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Read completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 Write completed with error (sct=0, sc=8)
00:27:24.100 starting I/O failed
00:27:24.100 [2024-07-25 07:32:56.239931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.669 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2835388 Killed "${NVMF_APP[@]}" "$@"
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8
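Condensed, the step above is the test's fault injection: target_disconnect.sh hard-kills the first target while the reconnect example is mid-workload, which is why all 32 outstanding I/Os complete in error and qpair 1 reports a CQ transport error before a fresh target is brought up below. A sketch of the injection, with $nvmfpid standing in for the first target's PID (2835388 here):

  kill -9 $nvmfpid                 # target_disconnect.sh line 45: no graceful shutdown
  sleep 2                          # line 47: leave the host disconnected briefly
  disconnect_init 192.168.100.8    # line 48: helper that starts and reconfigures a new nvmf_tgt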
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2836218
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2836218
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2836218 ']'
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:24.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:24.669 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:24.669 [2024-07-25 07:32:57.104371] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:27:24.669 [2024-07-25 07:32:57.104422] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:24.669 EAL: No free 2048 kB hugepages reported on node 1
00:27:24.929 [2024-07-25 07:32:57.204899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Read completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 Write completed with error (sct=0, sc=8)
00:27:24.929 starting I/O failed
00:27:24.929 [2024-07-25 07:32:57.244877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.929 [2024-07-25 07:32:57.273518] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:24.929 [2024-07-25 07:32:57.273556] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:24.929 [2024-07-25 07:32:57.273565] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:24.929 [2024-07-25 07:32:57.273573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:24.929 [2024-07-25 07:32:57.273596] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:24.929 [2024-07-25 07:32:57.273732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:27:24.929 [2024-07-25 07:32:57.273841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:27:24.929 [2024-07-25 07:32:57.273949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:27:24.929 [2024-07-25 07:32:57.273950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:25.498 Malloc0
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.498 07:32:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:25.498 [2024-07-25 07:32:58.003829] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x175ee40/0x176abc0) succeed.
00:27:25.498 [2024-07-25 07:32:58.013381] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1760480/0x17ac250) succeed.
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:25.758 [2024-07-25 07:32:58.150430] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.758 07:32:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2835670
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Read completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 Write completed with error (sct=0, sc=8)
00:27:25.758 starting I/O failed
00:27:25.758 [2024-07-25 07:32:58.249962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Write completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 Read completed with error (sct=0, sc=8)
00:27:27.142 starting I/O failed
00:27:27.142 [2024-07-25 07:32:59.255030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.142 [2024-07-25 07:32:59.261491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.142 [2024-07-25 07:32:59.261541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.142 [2024-07-25 07:32:59.261564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.142 [2024-07-25 07:32:59.261575] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.142 [2024-07-25 07:32:59.261584] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.142 [2024-07-25 07:32:59.271942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.142 qpair failed and we were unable to recover it.
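The seven-line pattern above repeats for each subsequent reconnect attempt: the replacement target has no record of controller ID 0x1 from the killed instance, so every I/O-queue CONNECT is rejected with sct 1, sc 130 (0x82, which NVMe-oF defines as a Connect-specific Invalid Parameters status), and the host gives up on the qpair. For triaging a log like this, a quick tally usually suffices; a sketch, with build.log as a hypothetical stand-in for this console output:

  # Count failed recovery attempts, then break them down by qpair id.
  grep -c 'qpair failed and we were unable to recover it' build.log
  grep -o 'on qpair id [0-9]*' build.log | sort | uniq -c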
00:27:27.142 [2024-07-25 07:32:59.281426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.142 [2024-07-25 07:32:59.281466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.142 [2024-07-25 07:32:59.281483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.142 [2024-07-25 07:32:59.281493] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.142 [2024-07-25 07:32:59.281502] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.291990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.301475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.301520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.301537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.301546] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.301555] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.312173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.321506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.321550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.321567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.321576] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.321585] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.331952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.341552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.341597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.341614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.341624] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.341640] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.352076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.361602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.361639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.361656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.361665] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.361674] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.372051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.381670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.381710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.381727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.381737] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.381746] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.392200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.401721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.401760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.401776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.401785] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.401794] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.412276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.421726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.421763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.421780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.421789] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.421798] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.432329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.441915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.441955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.441971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.441981] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.441989] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.452342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.461990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.462024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.462040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.462050] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.462058] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.472482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.482054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.482093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.482110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.482119] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.482128] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.492559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.502025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.502065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.502082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.502091] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.502100] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.512513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.522098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.522138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.522154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.522166] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.522175] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.532503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.542226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.542266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.542282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.542291] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.542299] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.143 [2024-07-25 07:32:59.552471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.143 qpair failed and we were unable to recover it.
00:27:27.143 [2024-07-25 07:32:59.562266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.143 [2024-07-25 07:32:59.562305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.143 [2024-07-25 07:32:59.562321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.143 [2024-07-25 07:32:59.562330] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.143 [2024-07-25 07:32:59.562339] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.144 [2024-07-25 07:32:59.572837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.144 qpair failed and we were unable to recover it.
00:27:27.144 [2024-07-25 07:32:59.582255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.144 [2024-07-25 07:32:59.582290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.144 [2024-07-25 07:32:59.582306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.144 [2024-07-25 07:32:59.582316] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.144 [2024-07-25 07:32:59.582324] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.144 [2024-07-25 07:32:59.592698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.144 qpair failed and we were unable to recover it.
00:27:27.144 [2024-07-25 07:32:59.602346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.144 [2024-07-25 07:32:59.602382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.144 [2024-07-25 07:32:59.602398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.144 [2024-07-25 07:32:59.602408] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.144 [2024-07-25 07:32:59.602416] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.144 [2024-07-25 07:32:59.612799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.144 qpair failed and we were unable to recover it.
00:27:27.144 [2024-07-25 07:32:59.622355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.144 [2024-07-25 07:32:59.622388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.144 [2024-07-25 07:32:59.622404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.144 [2024-07-25 07:32:59.622413] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.144 [2024-07-25 07:32:59.622421] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.144 [2024-07-25 07:32:59.632911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.144 qpair failed and we were unable to recover it.
00:27:27.144 [2024-07-25 07:32:59.642404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.144 [2024-07-25 07:32:59.642441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.144 [2024-07-25 07:32:59.642457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.144 [2024-07-25 07:32:59.642466] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.144 [2024-07-25 07:32:59.642475] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.144 [2024-07-25 07:32:59.653052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.144 qpair failed and we were unable to recover it.
00:27:27.144 [2024-07-25 07:32:59.662505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.144 [2024-07-25 07:32:59.662543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.144 [2024-07-25 07:32:59.662559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.144 [2024-07-25 07:32:59.662568] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.144 [2024-07-25 07:32:59.662577] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.404 [2024-07-25 07:32:59.673001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.404 qpair failed and we were unable to recover it.
00:27:27.404 [2024-07-25 07:32:59.682620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.404 [2024-07-25 07:32:59.682659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.404 [2024-07-25 07:32:59.682675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.404 [2024-07-25 07:32:59.682685] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.404 [2024-07-25 07:32:59.682694] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.404 [2024-07-25 07:32:59.693014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.404 qpair failed and we were unable to recover it.
00:27:27.404 [2024-07-25 07:32:59.702597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.404 [2024-07-25 07:32:59.702639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.404 [2024-07-25 07:32:59.702660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.404 [2024-07-25 07:32:59.702669] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.404 [2024-07-25 07:32:59.702678] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.404 [2024-07-25 07:32:59.712961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.404 qpair failed and we were unable to recover it.
00:27:27.404 [2024-07-25 07:32:59.722668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.404 [2024-07-25 07:32:59.722706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.404 [2024-07-25 07:32:59.722722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.404 [2024-07-25 07:32:59.722731] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.404 [2024-07-25 07:32:59.722740] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.404 [2024-07-25 07:32:59.733288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.404 qpair failed and we were unable to recover it.
00:27:27.404 [2024-07-25 07:32:59.742770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.404 [2024-07-25 07:32:59.742808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.404 [2024-07-25 07:32:59.742824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.404 [2024-07-25 07:32:59.742833] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.404 [2024-07-25 07:32:59.742842] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.404 [2024-07-25 07:32:59.753270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.404 qpair failed and we were unable to recover it.
00:27:27.405 [2024-07-25 07:32:59.762735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.405 [2024-07-25 07:32:59.762773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.405 [2024-07-25 07:32:59.762789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.405 [2024-07-25 07:32:59.762798] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.405 [2024-07-25 07:32:59.762807] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.405 [2024-07-25 07:32:59.773339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.405 qpair failed and we were unable to recover it.
00:27:27.405 [2024-07-25 07:32:59.782885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.405 [2024-07-25 07:32:59.782924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.405 [2024-07-25 07:32:59.782940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.405 [2024-07-25 07:32:59.782950] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.405 [2024-07-25 07:32:59.782961] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.405 [2024-07-25 07:32:59.793364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.405 qpair failed and we were unable to recover it.
00:27:27.405 [2024-07-25 07:32:59.803005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.405 [2024-07-25 07:32:59.803044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.405 [2024-07-25 07:32:59.803060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.405 [2024-07-25 07:32:59.803069] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.405 [2024-07-25 07:32:59.803078] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.405 [2024-07-25 07:32:59.813505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.405 qpair failed and we were unable to recover it.
00:27:27.405 [2024-07-25 07:32:59.822987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.405 [2024-07-25 07:32:59.823023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.405 [2024-07-25 07:32:59.823040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.405 [2024-07-25 07:32:59.823049] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.405 [2024-07-25 07:32:59.823059] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.405 [2024-07-25 07:32:59.833638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.405 qpair failed and we were unable to recover it.
00:27:27.405 [2024-07-25 07:32:59.843064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.405 [2024-07-25 07:32:59.843100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.405 [2024-07-25 07:32:59.843116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.405 [2024-07-25 07:32:59.843126] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.405 [2024-07-25 07:32:59.843134] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.405 [2024-07-25 07:32:59.853304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.405 qpair failed and we were unable to recover it.
00:27:27.405 [2024-07-25 07:32:59.863081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.405 [2024-07-25 07:32:59.863121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.405 [2024-07-25 07:32:59.863136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.405 [2024-07-25 07:32:59.863146] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.405 [2024-07-25 07:32:59.863154] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.405 [2024-07-25 07:32:59.873468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.405 qpair failed and we were unable to recover it.
00:27:27.405 [2024-07-25 07:32:59.883202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.405 [2024-07-25 07:32:59.883241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.405 [2024-07-25 07:32:59.883257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.405 [2024-07-25 07:32:59.883266] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.405 [2024-07-25 07:32:59.883275] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.405 [2024-07-25 07:32:59.893504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.405 qpair failed and we were unable to recover it.
00:27:27.405 [2024-07-25 07:32:59.903146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.405 [2024-07-25 07:32:59.903188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.405 [2024-07-25 07:32:59.903204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.405 [2024-07-25 07:32:59.903213] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.405 [2024-07-25 07:32:59.903222] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.405 [2024-07-25 07:32:59.913584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.405 qpair failed and we were unable to recover it.
00:27:27.405 [2024-07-25 07:32:59.923212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.405 [2024-07-25 07:32:59.923252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.405 [2024-07-25 07:32:59.923268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.405 [2024-07-25 07:32:59.923277] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.405 [2024-07-25 07:32:59.923287] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.665 [2024-07-25 07:32:59.933714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-25 07:32:59.943252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.665 [2024-07-25 07:32:59.943290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.665 [2024-07-25 07:32:59.943306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.665 [2024-07-25 07:32:59.943315] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.665 [2024-07-25 07:32:59.943324] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.665 [2024-07-25 07:32:59.953614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-25 07:32:59.963438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.665 [2024-07-25 07:32:59.963476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.665 [2024-07-25 07:32:59.963492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.665 [2024-07-25 07:32:59.963505] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.665 [2024-07-25 07:32:59.963514] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.665 [2024-07-25 07:32:59.973784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-25 07:32:59.983466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.665 [2024-07-25 07:32:59.983502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.665 [2024-07-25 07:32:59.983519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.665 [2024-07-25 07:32:59.983528] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.665 [2024-07-25 07:32:59.983537] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.665 [2024-07-25 07:32:59.993747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-25 07:33:00.003493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.666 [2024-07-25 07:33:00.003534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.666 [2024-07-25 07:33:00.003550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.666 [2024-07-25 07:33:00.003560] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.666 [2024-07-25 07:33:00.003569] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.666 [2024-07-25 07:33:00.013916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.666 qpair failed and we were unable to recover it.
00:27:27.666 [2024-07-25 07:33:00.023377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.666 [2024-07-25 07:33:00.023421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.666 [2024-07-25 07:33:00.023437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.666 [2024-07-25 07:33:00.023446] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.666 [2024-07-25 07:33:00.023455] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.666 [2024-07-25 07:33:00.033748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.666 qpair failed and we were unable to recover it.
00:27:27.666 [2024-07-25 07:33:00.043616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.666 [2024-07-25 07:33:00.043662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.666 [2024-07-25 07:33:00.043678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.666 [2024-07-25 07:33:00.043687] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.666 [2024-07-25 07:33:00.043696] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.666 [2024-07-25 07:33:00.054002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.666 qpair failed and we were unable to recover it.
00:27:27.666 [2024-07-25 07:33:00.063685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.666 [2024-07-25 07:33:00.063728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.666 [2024-07-25 07:33:00.063744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.666 [2024-07-25 07:33:00.063753] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.666 [2024-07-25 07:33:00.063762] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.666 [2024-07-25 07:33:00.074088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.666 qpair failed and we were unable to recover it.
00:27:27.666 [2024-07-25 07:33:00.083783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.666 [2024-07-25 07:33:00.083820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.666 [2024-07-25 07:33:00.083838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.666 [2024-07-25 07:33:00.083847] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.666 [2024-07-25 07:33:00.083857] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.666 [2024-07-25 07:33:00.094242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.666 qpair failed and we were unable to recover it.
00:27:27.666 [2024-07-25 07:33:00.103748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.666 [2024-07-25 07:33:00.103780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.666 [2024-07-25 07:33:00.103797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.666 [2024-07-25 07:33:00.103806] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.666 [2024-07-25 07:33:00.103816] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.666 [2024-07-25 07:33:00.114061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.666 qpair failed and we were unable to recover it.
00:27:27.666 [2024-07-25 07:33:00.123796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.666 [2024-07-25 07:33:00.123836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.666 [2024-07-25 07:33:00.123852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.666 [2024-07-25 07:33:00.123861] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.666 [2024-07-25 07:33:00.123870] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.666 [2024-07-25 07:33:00.134140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.666 qpair failed and we were unable to recover it.
00:27:27.666 [2024-07-25 07:33:00.143896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.666 [2024-07-25 07:33:00.143936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.666 [2024-07-25 07:33:00.143956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.666 [2024-07-25 07:33:00.143965] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.666 [2024-07-25 07:33:00.143974] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.666 [2024-07-25 07:33:00.154189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.666 qpair failed and we were unable to recover it.
00:27:27.666 [2024-07-25 07:33:00.164010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.666 [2024-07-25 07:33:00.164047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.666 [2024-07-25 07:33:00.164062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.666 [2024-07-25 07:33:00.164072] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.666 [2024-07-25 07:33:00.164080] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.666 [2024-07-25 07:33:00.174317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.666 qpair failed and we were unable to recover it.
00:27:27.666 [2024-07-25 07:33:00.184037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.666 [2024-07-25 07:33:00.184070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.666 [2024-07-25 07:33:00.184086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.666 [2024-07-25 07:33:00.184096] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.666 [2024-07-25 07:33:00.184104] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.929 [2024-07-25 07:33:00.194354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.929 qpair failed and we were unable to recover it.
00:27:27.929 [2024-07-25 07:33:00.204088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.929 [2024-07-25 07:33:00.204127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.929 [2024-07-25 07:33:00.204144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.929 [2024-07-25 07:33:00.204153] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.929 [2024-07-25 07:33:00.204162] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.929 [2024-07-25 07:33:00.214489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.929 qpair failed and we were unable to recover it.
00:27:27.929 [2024-07-25 07:33:00.224312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.929 [2024-07-25 07:33:00.224349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.929 [2024-07-25 07:33:00.224364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.929 [2024-07-25 07:33:00.224374] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.929 [2024-07-25 07:33:00.224385] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.929 [2024-07-25 07:33:00.234505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.929 qpair failed and we were unable to recover it.
00:27:27.929 [2024-07-25 07:33:00.244801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.929 [2024-07-25 07:33:00.244841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.929 [2024-07-25 07:33:00.244857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.929 [2024-07-25 07:33:00.244866] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.929 [2024-07-25 07:33:00.244875] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.929 [2024-07-25 07:33:00.254374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.929 qpair failed and we were unable to recover it.
00:27:27.929 [2024-07-25 07:33:00.264195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.929 [2024-07-25 07:33:00.264233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.929 [2024-07-25 07:33:00.264249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.929 [2024-07-25 07:33:00.264258] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.930 [2024-07-25 07:33:00.264266] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.930 [2024-07-25 07:33:00.274663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.930 qpair failed and we were unable to recover it.
00:27:27.930 [2024-07-25 07:33:00.284280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.930 [2024-07-25 07:33:00.284317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.930 [2024-07-25 07:33:00.284332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.930 [2024-07-25 07:33:00.284342] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.930 [2024-07-25 07:33:00.284351] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.930 [2024-07-25 07:33:00.294713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.930 qpair failed and we were unable to recover it.
00:27:27.930 [2024-07-25 07:33:00.304356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.930 [2024-07-25 07:33:00.304392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.930 [2024-07-25 07:33:00.304408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.930 [2024-07-25 07:33:00.304418] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.930 [2024-07-25 07:33:00.304427] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.930 [2024-07-25 07:33:00.314770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.930 qpair failed and we were unable to recover it.
00:27:27.930 [2024-07-25 07:33:00.324325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.930 [2024-07-25 07:33:00.324366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.930 [2024-07-25 07:33:00.324383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.930 [2024-07-25 07:33:00.324392] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.930 [2024-07-25 07:33:00.324401] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.930 [2024-07-25 07:33:00.334822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.930 qpair failed and we were unable to recover it.
00:27:27.930 [2024-07-25 07:33:00.344422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.930 [2024-07-25 07:33:00.344459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.930 [2024-07-25 07:33:00.344474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.930 [2024-07-25 07:33:00.344484] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.930 [2024-07-25 07:33:00.344492] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.930 [2024-07-25 07:33:00.354961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.930 qpair failed and we were unable to recover it.
00:27:27.930 [2024-07-25 07:33:00.364580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.930 [2024-07-25 07:33:00.364621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.930 [2024-07-25 07:33:00.364642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.930 [2024-07-25 07:33:00.364652] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.930 [2024-07-25 07:33:00.364661] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.930 [2024-07-25 07:33:00.374859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.930 qpair failed and we were unable to recover it.
00:27:27.930 [2024-07-25 07:33:00.384529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.930 [2024-07-25 07:33:00.384576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.930 [2024-07-25 07:33:00.384594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.930 [2024-07-25 07:33:00.384603] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.930 [2024-07-25 07:33:00.384613] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.930 [2024-07-25 07:33:00.394891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.930 qpair failed and we were unable to recover it.
00:27:27.930 [2024-07-25 07:33:00.404532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.930 [2024-07-25 07:33:00.404571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.930 [2024-07-25 07:33:00.404587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.930 [2024-07-25 07:33:00.404599] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.930 [2024-07-25 07:33:00.404608] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.930 [2024-07-25 07:33:00.415115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.930 qpair failed and we were unable to recover it.
00:27:27.930 [2024-07-25 07:33:00.424578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.930 [2024-07-25 07:33:00.424618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.930 [2024-07-25 07:33:00.424639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.930 [2024-07-25 07:33:00.424648] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.930 [2024-07-25 07:33:00.424657] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.930 [2024-07-25 07:33:00.435000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.930 qpair failed and we were unable to recover it.
00:27:27.930 [2024-07-25 07:33:00.444688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:27.930 [2024-07-25 07:33:00.444724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:27.930 [2024-07-25 07:33:00.444740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:27.931 [2024-07-25 07:33:00.444750] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:27.931 [2024-07-25 07:33:00.444758] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:27.931 [2024-07-25 07:33:00.455030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:27.931 qpair failed and we were unable to recover it.
00:27:28.191 [2024-07-25 07:33:00.464898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.191 [2024-07-25 07:33:00.464941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.191 [2024-07-25 07:33:00.464957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.191 [2024-07-25 07:33:00.464967] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.191 [2024-07-25 07:33:00.464976] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.191 [2024-07-25 07:33:00.474973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.191 qpair failed and we were unable to recover it.
00:27:28.191 [2024-07-25 07:33:00.484868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.191 [2024-07-25 07:33:00.484909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.191 [2024-07-25 07:33:00.484926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.191 [2024-07-25 07:33:00.484935] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.191 [2024-07-25 07:33:00.484943] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.191 [2024-07-25 07:33:00.495453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.191 qpair failed and we were unable to recover it.
00:27:28.191 [2024-07-25 07:33:00.504885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.191 [2024-07-25 07:33:00.504927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.191 [2024-07-25 07:33:00.504943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.191 [2024-07-25 07:33:00.504952] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.191 [2024-07-25 07:33:00.504960] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.191 [2024-07-25 07:33:00.515297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.191 qpair failed and we were unable to recover it.
00:27:28.191 [2024-07-25 07:33:00.525021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.191 [2024-07-25 07:33:00.525058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.191 [2024-07-25 07:33:00.525074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.191 [2024-07-25 07:33:00.525083] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.191 [2024-07-25 07:33:00.525092] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.191 [2024-07-25 07:33:00.535432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.191 qpair failed and we were unable to recover it.
00:27:28.191 [2024-07-25 07:33:00.545038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.191 [2024-07-25 07:33:00.545078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.191 [2024-07-25 07:33:00.545094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.191 [2024-07-25 07:33:00.545104] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.191 [2024-07-25 07:33:00.545113] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.191 [2024-07-25 07:33:00.555277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.191 qpair failed and we were unable to recover it.
00:27:28.191 [2024-07-25 07:33:00.565144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.191 [2024-07-25 07:33:00.565181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.191 [2024-07-25 07:33:00.565197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.191 [2024-07-25 07:33:00.565206] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.191 [2024-07-25 07:33:00.565215] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.191 [2024-07-25 07:33:00.575611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.191 qpair failed and we were unable to recover it.
00:27:28.191 [2024-07-25 07:33:00.585132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.191 [2024-07-25 07:33:00.585173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.191 [2024-07-25 07:33:00.585192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.191 [2024-07-25 07:33:00.585201] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.191 [2024-07-25 07:33:00.585210] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.191 [2024-07-25 07:33:00.595585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.191 qpair failed and we were unable to recover it.
00:27:28.191 [2024-07-25 07:33:00.605178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.191 [2024-07-25 07:33:00.605219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.191 [2024-07-25 07:33:00.605235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.191 [2024-07-25 07:33:00.605245] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.191 [2024-07-25 07:33:00.605253] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.191 [2024-07-25 07:33:00.615543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.191 qpair failed and we were unable to recover it.
00:27:28.191 [2024-07-25 07:33:00.625175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.191 [2024-07-25 07:33:00.625217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.191 [2024-07-25 07:33:00.625232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.191 [2024-07-25 07:33:00.625242] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.191 [2024-07-25 07:33:00.625251] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.191 [2024-07-25 07:33:00.635663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.191 qpair failed and we were unable to recover it.
00:27:28.191 [2024-07-25 07:33:00.645306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.192 [2024-07-25 07:33:00.645342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.192 [2024-07-25 07:33:00.645357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.192 [2024-07-25 07:33:00.645367] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.192 [2024-07-25 07:33:00.645375] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.192 [2024-07-25 07:33:00.655810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.192 qpair failed and we were unable to recover it.
00:27:28.192 [2024-07-25 07:33:00.665406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.192 [2024-07-25 07:33:00.665440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.192 [2024-07-25 07:33:00.665456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.192 [2024-07-25 07:33:00.665465] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.192 [2024-07-25 07:33:00.665478] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.192 [2024-07-25 07:33:00.675869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.192 qpair failed and we were unable to recover it.
00:27:28.192 [2024-07-25 07:33:00.685509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.192 [2024-07-25 07:33:00.685547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.192 [2024-07-25 07:33:00.685563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.192 [2024-07-25 07:33:00.685573] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.192 [2024-07-25 07:33:00.685582] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.192 [2024-07-25 07:33:00.695759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.192 qpair failed and we were unable to recover it.
00:27:28.192 [2024-07-25 07:33:00.705520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.192 [2024-07-25 07:33:00.705557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.192 [2024-07-25 07:33:00.705573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.192 [2024-07-25 07:33:00.705582] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.192 [2024-07-25 07:33:00.705591] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.192 [2024-07-25 07:33:00.716034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.192 qpair failed and we were unable to recover it.
00:27:28.450 [2024-07-25 07:33:00.725611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.725655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.725671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.725680] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.725689] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.736003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.451 [2024-07-25 07:33:00.745631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.745667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.745683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.745692] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.745701] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.756115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.451 [2024-07-25 07:33:00.765671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.765710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.765726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.765736] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.765745] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.776038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.451 [2024-07-25 07:33:00.785762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.785801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.785817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.785826] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.785835] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.796019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.451 [2024-07-25 07:33:00.805664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.805698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.805713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.805722] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.805731] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.816306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.451 [2024-07-25 07:33:00.825884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.825921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.825937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.825946] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.825955] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.836354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.451 [2024-07-25 07:33:00.846036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.846073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.846089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.846102] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.846111] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.856304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.451 [2024-07-25 07:33:00.865963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.866004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.866020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.866030] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.866039] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.876329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.451 [2024-07-25 07:33:00.886097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.886136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.886152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.886162] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.886170] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.896465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.451 [2024-07-25 07:33:00.906069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.906103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.906119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.906128] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.906137] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.916421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.451 [2024-07-25 07:33:00.926303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.926341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.926360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.926370] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.926380] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.936668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.451 [2024-07-25 07:33:00.946202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.946245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.946261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.946270] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.946278] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.956658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.451 [2024-07-25 07:33:00.966289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.451 [2024-07-25 07:33:00.966323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.451 [2024-07-25 07:33:00.966340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.451 [2024-07-25 07:33:00.966349] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.451 [2024-07-25 07:33:00.966358] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.451 [2024-07-25 07:33:00.976670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.451 qpair failed and we were unable to recover it.
00:27:28.711 [2024-07-25 07:33:00.986362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.711 [2024-07-25 07:33:00.986398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.711 [2024-07-25 07:33:00.986415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.711 [2024-07-25 07:33:00.986424] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.711 [2024-07-25 07:33:00.986433] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.711 [2024-07-25 07:33:00.996868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.711 qpair failed and we were unable to recover it.
00:27:28.711 [2024-07-25 07:33:01.006354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.711 [2024-07-25 07:33:01.006390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.711 [2024-07-25 07:33:01.006405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.711 [2024-07-25 07:33:01.006415] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.711 [2024-07-25 07:33:01.006423] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.711 [2024-07-25 07:33:01.016971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.711 qpair failed and we were unable to recover it.
00:27:28.711 [2024-07-25 07:33:01.026586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.711 [2024-07-25 07:33:01.026639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.711 [2024-07-25 07:33:01.026661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.711 [2024-07-25 07:33:01.026671] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.711 [2024-07-25 07:33:01.026679] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.711 [2024-07-25 07:33:01.037175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.711 qpair failed and we were unable to recover it.
00:27:28.711 [2024-07-25 07:33:01.046677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.711 [2024-07-25 07:33:01.046712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.711 [2024-07-25 07:33:01.046728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.711 [2024-07-25 07:33:01.046738] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.711 [2024-07-25 07:33:01.046747] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.711 [2024-07-25 07:33:01.057252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.711 qpair failed and we were unable to recover it.
00:27:28.711 [2024-07-25 07:33:01.066622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.711 [2024-07-25 07:33:01.066665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.711 [2024-07-25 07:33:01.066681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.711 [2024-07-25 07:33:01.066691] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.711 [2024-07-25 07:33:01.066701] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.711 [2024-07-25 07:33:01.077241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.711 qpair failed and we were unable to recover it.
00:27:28.711 [2024-07-25 07:33:01.086719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.711 [2024-07-25 07:33:01.086759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.711 [2024-07-25 07:33:01.086776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.711 [2024-07-25 07:33:01.086786] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.711 [2024-07-25 07:33:01.086795] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.711 [2024-07-25 07:33:01.097106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.711 qpair failed and we were unable to recover it.
00:27:28.711 [2024-07-25 07:33:01.106821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.711 [2024-07-25 07:33:01.106860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.711 [2024-07-25 07:33:01.106876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.711 [2024-07-25 07:33:01.106885] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.711 [2024-07-25 07:33:01.106897] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.711 [2024-07-25 07:33:01.117168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.711 qpair failed and we were unable to recover it.
00:27:28.711 [2024-07-25 07:33:01.126758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.712 [2024-07-25 07:33:01.126794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.712 [2024-07-25 07:33:01.126810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.712 [2024-07-25 07:33:01.126819] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.712 [2024-07-25 07:33:01.126828] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.712 [2024-07-25 07:33:01.137528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.712 qpair failed and we were unable to recover it.
00:27:28.712 [2024-07-25 07:33:01.146987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.712 [2024-07-25 07:33:01.147026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.712 [2024-07-25 07:33:01.147042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.712 [2024-07-25 07:33:01.147051] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.712 [2024-07-25 07:33:01.147060] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.712 [2024-07-25 07:33:01.157510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.712 qpair failed and we were unable to recover it.
00:27:28.712 [2024-07-25 07:33:01.167065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.712 [2024-07-25 07:33:01.167104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.712 [2024-07-25 07:33:01.167120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.712 [2024-07-25 07:33:01.167130] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.712 [2024-07-25 07:33:01.167139] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.712 [2024-07-25 07:33:01.177427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.712 qpair failed and we were unable to recover it.
00:27:28.712 [2024-07-25 07:33:01.186954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.712 [2024-07-25 07:33:01.186995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.712 [2024-07-25 07:33:01.187012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.712 [2024-07-25 07:33:01.187022] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.712 [2024-07-25 07:33:01.187031] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.712 [2024-07-25 07:33:01.197478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.712 qpair failed and we were unable to recover it.
00:27:28.712 [2024-07-25 07:33:01.207123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.712 [2024-07-25 07:33:01.207163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.712 [2024-07-25 07:33:01.207178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.712 [2024-07-25 07:33:01.207187] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.712 [2024-07-25 07:33:01.207196] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.712 [2024-07-25 07:33:01.217628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.712 qpair failed and we were unable to recover it.
00:27:28.712 [2024-07-25 07:33:01.227195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.712 [2024-07-25 07:33:01.227234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.712 [2024-07-25 07:33:01.227250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.712 [2024-07-25 07:33:01.227259] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.712 [2024-07-25 07:33:01.227268] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.712 [2024-07-25 07:33:01.237879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.712 qpair failed and we were unable to recover it.
00:27:28.974 [2024-07-25 07:33:01.247317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.974 [2024-07-25 07:33:01.247356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.974 [2024-07-25 07:33:01.247372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.974 [2024-07-25 07:33:01.247381] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.974 [2024-07-25 07:33:01.247390] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.974 [2024-07-25 07:33:01.257863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.974 qpair failed and we were unable to recover it.
00:27:28.974 [2024-07-25 07:33:01.267305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.974 [2024-07-25 07:33:01.267350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.974 [2024-07-25 07:33:01.267366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.974 [2024-07-25 07:33:01.267375] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.974 [2024-07-25 07:33:01.267384] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.974 [2024-07-25 07:33:01.277636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.974 qpair failed and we were unable to recover it.
00:27:28.974 [2024-07-25 07:33:01.287348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.974 [2024-07-25 07:33:01.287381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.974 [2024-07-25 07:33:01.287398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.974 [2024-07-25 07:33:01.287410] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.974 [2024-07-25 07:33:01.287419] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.974 [2024-07-25 07:33:01.297801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.974 qpair failed and we were unable to recover it.
00:27:28.974 [2024-07-25 07:33:01.307421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.974 [2024-07-25 07:33:01.307459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.974 [2024-07-25 07:33:01.307475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.975 [2024-07-25 07:33:01.307484] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.975 [2024-07-25 07:33:01.307493] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.975 [2024-07-25 07:33:01.317947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.975 qpair failed and we were unable to recover it.
00:27:28.975 [2024-07-25 07:33:01.327471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.975 [2024-07-25 07:33:01.327509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.975 [2024-07-25 07:33:01.327526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.975 [2024-07-25 07:33:01.327536] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.975 [2024-07-25 07:33:01.327544] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.975 [2024-07-25 07:33:01.338006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.975 qpair failed and we were unable to recover it.
00:27:28.975 [2024-07-25 07:33:01.347527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.975 [2024-07-25 07:33:01.347567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.975 [2024-07-25 07:33:01.347583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.975 [2024-07-25 07:33:01.347592] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.975 [2024-07-25 07:33:01.347601] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.975 [2024-07-25 07:33:01.357942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.975 qpair failed and we were unable to recover it.
00:27:28.975 [2024-07-25 07:33:01.367586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.975 [2024-07-25 07:33:01.367621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.975 [2024-07-25 07:33:01.367641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.975 [2024-07-25 07:33:01.367651] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.975 [2024-07-25 07:33:01.367659] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.975 [2024-07-25 07:33:01.378231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.975 qpair failed and we were unable to recover it.
00:27:28.975 [2024-07-25 07:33:01.387720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.975 [2024-07-25 07:33:01.387752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.975 [2024-07-25 07:33:01.387768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.975 [2024-07-25 07:33:01.387777] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.975 [2024-07-25 07:33:01.387786] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.975 [2024-07-25 07:33:01.398250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.975 qpair failed and we were unable to recover it.
00:27:28.975 [2024-07-25 07:33:01.407754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.975 [2024-07-25 07:33:01.407792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.975 [2024-07-25 07:33:01.407807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.975 [2024-07-25 07:33:01.407817] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.975 [2024-07-25 07:33:01.407825] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.975 [2024-07-25 07:33:01.418273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.975 qpair failed and we were unable to recover it.
00:27:28.975 [2024-07-25 07:33:01.427819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.975 [2024-07-25 07:33:01.427858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.975 [2024-07-25 07:33:01.427875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.975 [2024-07-25 07:33:01.427884] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.975 [2024-07-25 07:33:01.427893] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.975 [2024-07-25 07:33:01.438031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.975 qpair failed and we were unable to recover it.
00:27:28.975 [2024-07-25 07:33:01.447928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.975 [2024-07-25 07:33:01.447969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.975 [2024-07-25 07:33:01.447984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.975 [2024-07-25 07:33:01.447993] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.975 [2024-07-25 07:33:01.448002] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.975 [2024-07-25 07:33:01.458258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.975 qpair failed and we were unable to recover it.
00:27:28.975 [2024-07-25 07:33:01.467979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.975 [2024-07-25 07:33:01.468018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.975 [2024-07-25 07:33:01.468037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.975 [2024-07-25 07:33:01.468047] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.975 [2024-07-25 07:33:01.468055] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.975 [2024-07-25 07:33:01.478399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.975 qpair failed and we were unable to recover it.
00:27:28.975 [2024-07-25 07:33:01.488016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:28.975 [2024-07-25 07:33:01.488054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:28.975 [2024-07-25 07:33:01.488071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:28.975 [2024-07-25 07:33:01.488080] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:28.975 [2024-07-25 07:33:01.488089] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:28.975 [2024-07-25 07:33:01.498472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:28.975 qpair failed and we were unable to recover it.
00:27:29.299 [2024-07-25 07:33:01.508056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.299 [2024-07-25 07:33:01.508094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.299 [2024-07-25 07:33:01.508111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.299 [2024-07-25 07:33:01.508121] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.299 [2024-07-25 07:33:01.508129] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.299 [2024-07-25 07:33:01.518532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.299 qpair failed and we were unable to recover it.
00:27:29.299 [2024-07-25 07:33:01.528093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.299 [2024-07-25 07:33:01.528131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.299 [2024-07-25 07:33:01.528146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.299 [2024-07-25 07:33:01.528156] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.299 [2024-07-25 07:33:01.528164] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.299 [2024-07-25 07:33:01.538500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.299 qpair failed and we were unable to recover it.
00:27:29.299 [2024-07-25 07:33:01.548117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.299 [2024-07-25 07:33:01.548157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.299 [2024-07-25 07:33:01.548173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.299 [2024-07-25 07:33:01.548182] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.299 [2024-07-25 07:33:01.548194] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.299 [2024-07-25 07:33:01.558712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.299 qpair failed and we were unable to recover it.
00:27:29.299 [2024-07-25 07:33:01.568378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.299 [2024-07-25 07:33:01.568420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.299 [2024-07-25 07:33:01.568435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.299 [2024-07-25 07:33:01.568445] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.299 [2024-07-25 07:33:01.568453] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.299 [2024-07-25 07:33:01.578630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.299 qpair failed and we were unable to recover it.
00:27:29.299 [2024-07-25 07:33:01.588333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.299 [2024-07-25 07:33:01.588371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.300 [2024-07-25 07:33:01.588388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.300 [2024-07-25 07:33:01.588397] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.300 [2024-07-25 07:33:01.588406] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.300 [2024-07-25 07:33:01.598609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.300 qpair failed and we were unable to recover it.
00:27:29.300 [2024-07-25 07:33:01.608354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.300 [2024-07-25 07:33:01.608392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.300 [2024-07-25 07:33:01.608408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.300 [2024-07-25 07:33:01.608417] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.300 [2024-07-25 07:33:01.608425] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.300 [2024-07-25 07:33:01.618795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.300 qpair failed and we were unable to recover it.
00:27:29.300 [2024-07-25 07:33:01.628386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.300 [2024-07-25 07:33:01.628425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.300 [2024-07-25 07:33:01.628441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.300 [2024-07-25 07:33:01.628451] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.300 [2024-07-25 07:33:01.628459] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.300 [2024-07-25 07:33:01.638859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.300 qpair failed and we were unable to recover it.
00:27:29.300 [2024-07-25 07:33:01.648453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.300 [2024-07-25 07:33:01.648494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.300 [2024-07-25 07:33:01.648510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.300 [2024-07-25 07:33:01.648519] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.300 [2024-07-25 07:33:01.648528] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.300 [2024-07-25 07:33:01.659086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.300 qpair failed and we were unable to recover it.
00:27:29.300 [2024-07-25 07:33:01.668638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.300 [2024-07-25 07:33:01.668680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.300 [2024-07-25 07:33:01.668696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.300 [2024-07-25 07:33:01.668706] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.300 [2024-07-25 07:33:01.668715] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.300 [2024-07-25 07:33:01.679002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.300 qpair failed and we were unable to recover it.
00:27:29.300 [2024-07-25 07:33:01.688614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.300 [2024-07-25 07:33:01.688656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.300 [2024-07-25 07:33:01.688672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.300 [2024-07-25 07:33:01.688681] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.300 [2024-07-25 07:33:01.688690] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.300 [2024-07-25 07:33:01.699198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.300 qpair failed and we were unable to recover it.
00:27:29.300 [2024-07-25 07:33:01.708755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.300 [2024-07-25 07:33:01.708789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.300 [2024-07-25 07:33:01.708805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.300 [2024-07-25 07:33:01.708814] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.300 [2024-07-25 07:33:01.708822] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.300 [2024-07-25 07:33:01.719180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.300 qpair failed and we were unable to recover it.
00:27:29.300 [2024-07-25 07:33:01.728769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.300 [2024-07-25 07:33:01.728807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.300 [2024-07-25 07:33:01.728823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.300 [2024-07-25 07:33:01.728836] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.300 [2024-07-25 07:33:01.728845] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.300 [2024-07-25 07:33:01.738997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.300 qpair failed and we were unable to recover it.
00:27:29.300 [2024-07-25 07:33:01.748774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.300 [2024-07-25 07:33:01.748815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.300 [2024-07-25 07:33:01.748831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.300 [2024-07-25 07:33:01.748840] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.300 [2024-07-25 07:33:01.748849] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.300 [2024-07-25 07:33:01.759099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.300 qpair failed and we were unable to recover it.
00:27:29.300 [2024-07-25 07:33:01.768833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.300 [2024-07-25 07:33:01.768865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.300 [2024-07-25 07:33:01.768881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.300 [2024-07-25 07:33:01.768890] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.300 [2024-07-25 07:33:01.768899] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.300 [2024-07-25 07:33:01.779366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.300 qpair failed and we were unable to recover it.
00:27:29.300 [2024-07-25 07:33:01.788954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.300 [2024-07-25 07:33:01.788991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.300 [2024-07-25 07:33:01.789008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.300 [2024-07-25 07:33:01.789017] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.300 [2024-07-25 07:33:01.789025] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.300 [2024-07-25 07:33:01.799521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.300 qpair failed and we were unable to recover it.
00:27:29.300 [2024-07-25 07:33:01.808998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.300 [2024-07-25 07:33:01.809040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.300 [2024-07-25 07:33:01.809056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.300 [2024-07-25 07:33:01.809065] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.300 [2024-07-25 07:33:01.809073] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.300 [2024-07-25 07:33:01.819447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.300 qpair failed and we were unable to recover it.
00:27:29.561 [2024-07-25 07:33:01.828852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.561 [2024-07-25 07:33:01.828895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.561 [2024-07-25 07:33:01.828913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.561 [2024-07-25 07:33:01.828922] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.561 [2024-07-25 07:33:01.828931] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.561 [2024-07-25 07:33:01.839466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.561 qpair failed and we were unable to recover it.
00:27:29.561 [2024-07-25 07:33:01.849128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.561 [2024-07-25 07:33:01.849165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.561 [2024-07-25 07:33:01.849181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.561 [2024-07-25 07:33:01.849191] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.561 [2024-07-25 07:33:01.849199] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.561 [2024-07-25 07:33:01.859420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.561 qpair failed and we were unable to recover it.
00:27:29.561 [2024-07-25 07:33:01.869231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.561 [2024-07-25 07:33:01.869264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.561 [2024-07-25 07:33:01.869280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.561 [2024-07-25 07:33:01.869289] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.561 [2024-07-25 07:33:01.869298] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.561 [2024-07-25 07:33:01.879556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.561 qpair failed and we were unable to recover it.
00:27:29.561 [2024-07-25 07:33:01.889229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.561 [2024-07-25 07:33:01.889269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.561 [2024-07-25 07:33:01.889285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.561 [2024-07-25 07:33:01.889294] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.561 [2024-07-25 07:33:01.889303] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.561 [2024-07-25 07:33:01.899691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.561 qpair failed and we were unable to recover it.
00:27:29.561 [2024-07-25 07:33:01.909250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.561 [2024-07-25 07:33:01.909291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.561 [2024-07-25 07:33:01.909310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.561 [2024-07-25 07:33:01.909320] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.561 [2024-07-25 07:33:01.909328] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.561 [2024-07-25 07:33:01.919486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.561 qpair failed and we were unable to recover it.
00:27:29.561 [2024-07-25 07:33:01.929365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.561 [2024-07-25 07:33:01.929406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.561 [2024-07-25 07:33:01.929423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.561 [2024-07-25 07:33:01.929432] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.562 [2024-07-25 07:33:01.929441] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.562 [2024-07-25 07:33:01.939630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.562 qpair failed and we were unable to recover it.
00:27:29.562 [2024-07-25 07:33:01.949154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.562 [2024-07-25 07:33:01.949192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.562 [2024-07-25 07:33:01.949207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.562 [2024-07-25 07:33:01.949216] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.562 [2024-07-25 07:33:01.949225] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.562 [2024-07-25 07:33:01.959642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.562 qpair failed and we were unable to recover it.
00:27:29.562 [2024-07-25 07:33:01.969443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.562 [2024-07-25 07:33:01.969483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.562 [2024-07-25 07:33:01.969499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.562 [2024-07-25 07:33:01.969508] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.562 [2024-07-25 07:33:01.969517] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.562 [2024-07-25 07:33:01.979928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.562 qpair failed and we were unable to recover it.
00:27:29.562 [2024-07-25 07:33:01.989413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.562 [2024-07-25 07:33:01.989452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.562 [2024-07-25 07:33:01.989468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.562 [2024-07-25 07:33:01.989478] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.562 [2024-07-25 07:33:01.989489] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.562 [2024-07-25 07:33:01.999680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.562 qpair failed and we were unable to recover it.
00:27:29.562 [2024-07-25 07:33:02.009447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.562 [2024-07-25 07:33:02.009487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.562 [2024-07-25 07:33:02.009504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.562 [2024-07-25 07:33:02.009513] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.562 [2024-07-25 07:33:02.009522] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.562 [2024-07-25 07:33:02.019907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.562 qpair failed and we were unable to recover it.
00:27:29.562 [2024-07-25 07:33:02.029459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.562 [2024-07-25 07:33:02.029497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.562 [2024-07-25 07:33:02.029513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.562 [2024-07-25 07:33:02.029522] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.562 [2024-07-25 07:33:02.029531] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.562 [2024-07-25 07:33:02.039933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.562 qpair failed and we were unable to recover it.
00:27:29.562 [2024-07-25 07:33:02.049493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.562 [2024-07-25 07:33:02.049532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.562 [2024-07-25 07:33:02.049548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.562 [2024-07-25 07:33:02.049557] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.562 [2024-07-25 07:33:02.049566] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.562 [2024-07-25 07:33:02.059871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.562 qpair failed and we were unable to recover it.
00:27:29.562 [2024-07-25 07:33:02.069704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.562 [2024-07-25 07:33:02.069746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.562 [2024-07-25 07:33:02.069762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.562 [2024-07-25 07:33:02.069771] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.562 [2024-07-25 07:33:02.069779] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.562 [2024-07-25 07:33:02.080133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.562 qpair failed and we were unable to recover it.
00:27:29.562 [2024-07-25 07:33:02.089656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.562 [2024-07-25 07:33:02.089697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.562 [2024-07-25 07:33:02.089712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.562 [2024-07-25 07:33:02.089723] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.562 [2024-07-25 07:33:02.089732] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.822 [2024-07-25 07:33:02.100048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.822 qpair failed and we were unable to recover it.
00:27:29.822 [2024-07-25 07:33:02.109707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.822 [2024-07-25 07:33:02.109744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.822 [2024-07-25 07:33:02.109760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.822 [2024-07-25 07:33:02.109770] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.822 [2024-07-25 07:33:02.109778] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.822 [2024-07-25 07:33:02.120151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.822 qpair failed and we were unable to recover it.
00:27:29.822 [2024-07-25 07:33:02.129789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.822 [2024-07-25 07:33:02.129828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.822 [2024-07-25 07:33:02.129843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.822 [2024-07-25 07:33:02.129853] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.822 [2024-07-25 07:33:02.129862] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.822 [2024-07-25 07:33:02.140081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.822 qpair failed and we were unable to recover it.
00:27:29.822 [2024-07-25 07:33:02.149831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.822 [2024-07-25 07:33:02.149871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.822 [2024-07-25 07:33:02.149886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.822 [2024-07-25 07:33:02.149896] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.822 [2024-07-25 07:33:02.149904] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.822 [2024-07-25 07:33:02.160282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.822 qpair failed and we were unable to recover it.
00:27:29.822 [2024-07-25 07:33:02.170002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.822 [2024-07-25 07:33:02.170040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.822 [2024-07-25 07:33:02.170055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.822 [2024-07-25 07:33:02.170071] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.822 [2024-07-25 07:33:02.170079] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.822 [2024-07-25 07:33:02.180237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.822 qpair failed and we were unable to recover it.
00:27:29.822 [2024-07-25 07:33:02.189969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.822 [2024-07-25 07:33:02.190005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.822 [2024-07-25 07:33:02.190020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.822 [2024-07-25 07:33:02.190030] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.822 [2024-07-25 07:33:02.190038] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.822 [2024-07-25 07:33:02.200199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.822 qpair failed and we were unable to recover it.
00:27:29.822 [2024-07-25 07:33:02.210013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.822 [2024-07-25 07:33:02.210054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.822 [2024-07-25 07:33:02.210069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.822 [2024-07-25 07:33:02.210078] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.822 [2024-07-25 07:33:02.210087] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.822 [2024-07-25 07:33:02.220346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.822 qpair failed and we were unable to recover it.
00:27:29.822 [2024-07-25 07:33:02.229974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.822 [2024-07-25 07:33:02.230014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.822 [2024-07-25 07:33:02.230029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.822 [2024-07-25 07:33:02.230039] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.822 [2024-07-25 07:33:02.230047] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.822 [2024-07-25 07:33:02.240457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.822 qpair failed and we were unable to recover it.
00:27:29.822 [2024-07-25 07:33:02.250164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.822 [2024-07-25 07:33:02.250201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.822 [2024-07-25 07:33:02.250217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.822 [2024-07-25 07:33:02.250226] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.822 [2024-07-25 07:33:02.250235] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.822 [2024-07-25 07:33:02.260500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.822 qpair failed and we were unable to recover it.
00:27:29.822 [2024-07-25 07:33:02.270206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.822 [2024-07-25 07:33:02.270246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.822 [2024-07-25 07:33:02.270262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.822 [2024-07-25 07:33:02.270272] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.822 [2024-07-25 07:33:02.270281] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.822 [2024-07-25 07:33:02.280499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.822 qpair failed and we were unable to recover it.
00:27:29.822 [2024-07-25 07:33:02.290300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.822 [2024-07-25 07:33:02.290340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.822 [2024-07-25 07:33:02.290355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.822 [2024-07-25 07:33:02.290364] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.822 [2024-07-25 07:33:02.290373] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.822 [2024-07-25 07:33:02.300616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.823 qpair failed and we were unable to recover it.
00:27:29.823 [2024-07-25 07:33:02.310335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.823 [2024-07-25 07:33:02.310372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.823 [2024-07-25 07:33:02.310387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.823 [2024-07-25 07:33:02.310396] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.823 [2024-07-25 07:33:02.310405] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.823 [2024-07-25 07:33:02.320638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.823 qpair failed and we were unable to recover it.
00:27:29.823 [2024-07-25 07:33:02.330275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.823 [2024-07-25 07:33:02.330312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.823 [2024-07-25 07:33:02.330327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.823 [2024-07-25 07:33:02.330336] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.823 [2024-07-25 07:33:02.330345] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:29.823 [2024-07-25 07:33:02.340570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:29.823 qpair failed and we were unable to recover it.
00:27:29.823 [2024-07-25 07:33:02.350344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.823 [2024-07-25 07:33:02.350381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.823 [2024-07-25 07:33:02.350400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.823 [2024-07-25 07:33:02.350409] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.823 [2024-07-25 07:33:02.350418] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.082 [2024-07-25 07:33:02.360651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.082 qpair failed and we were unable to recover it.
00:27:30.082 [2024-07-25 07:33:02.370463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.082 [2024-07-25 07:33:02.370502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.083 [2024-07-25 07:33:02.370519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.083 [2024-07-25 07:33:02.370528] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.083 [2024-07-25 07:33:02.370537] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.083 [2024-07-25 07:33:02.380949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.083 qpair failed and we were unable to recover it.
00:27:30.083 [2024-07-25 07:33:02.390549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.083 [2024-07-25 07:33:02.390595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.083 [2024-07-25 07:33:02.390611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.083 [2024-07-25 07:33:02.390621] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.083 [2024-07-25 07:33:02.390634] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.083 [2024-07-25 07:33:02.400867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.083 qpair failed and we were unable to recover it.
00:27:30.083 [2024-07-25 07:33:02.410555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.083 [2024-07-25 07:33:02.410597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.083 [2024-07-25 07:33:02.410613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.083 [2024-07-25 07:33:02.410622] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.083 [2024-07-25 07:33:02.410636] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.083 [2024-07-25 07:33:02.420948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.083 qpair failed and we were unable to recover it.
00:27:30.083 [2024-07-25 07:33:02.430562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.083 [2024-07-25 07:33:02.430598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.083 [2024-07-25 07:33:02.430613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.083 [2024-07-25 07:33:02.430623] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.083 [2024-07-25 07:33:02.430639] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.083 [2024-07-25 07:33:02.440960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.083 qpair failed and we were unable to recover it.
00:27:30.083 [2024-07-25 07:33:02.450596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.083 [2024-07-25 07:33:02.450639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.083 [2024-07-25 07:33:02.450654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.083 [2024-07-25 07:33:02.450663] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.083 [2024-07-25 07:33:02.450672] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.083 [2024-07-25 07:33:02.460902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.083 qpair failed and we were unable to recover it.
00:27:30.083 [2024-07-25 07:33:02.470728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.083 [2024-07-25 07:33:02.470771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.083 [2024-07-25 07:33:02.470786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.083 [2024-07-25 07:33:02.470795] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.083 [2024-07-25 07:33:02.470804] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.083 [2024-07-25 07:33:02.481234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.083 qpair failed and we were unable to recover it.
00:27:30.083 [2024-07-25 07:33:02.490734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.083 [2024-07-25 07:33:02.490768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.083 [2024-07-25 07:33:02.490784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.083 [2024-07-25 07:33:02.490793] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.083 [2024-07-25 07:33:02.490802] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.083 [2024-07-25 07:33:02.501071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.083 qpair failed and we were unable to recover it.
00:27:30.083 [2024-07-25 07:33:02.510806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.083 [2024-07-25 07:33:02.510847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.083 [2024-07-25 07:33:02.510863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.083 [2024-07-25 07:33:02.510872] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.083 [2024-07-25 07:33:02.510880] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.083 [2024-07-25 07:33:02.521366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.083 qpair failed and we were unable to recover it.
00:27:30.083 [2024-07-25 07:33:02.530986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.083 [2024-07-25 07:33:02.531024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.083 [2024-07-25 07:33:02.531040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.083 [2024-07-25 07:33:02.531049] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.083 [2024-07-25 07:33:02.531057] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.083 [2024-07-25 07:33:02.541264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.083 qpair failed and we were unable to recover it.
00:27:30.083 [2024-07-25 07:33:02.550959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.083 [2024-07-25 07:33:02.550994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.083 [2024-07-25 07:33:02.551011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.083 [2024-07-25 07:33:02.551020] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.083 [2024-07-25 07:33:02.551028] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.083 [2024-07-25 07:33:02.561410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.083 qpair failed and we were unable to recover it.
00:27:30.083 [2024-07-25 07:33:02.571090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.083 [2024-07-25 07:33:02.571125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.083 [2024-07-25 07:33:02.571141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.083 [2024-07-25 07:33:02.571150] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.083 [2024-07-25 07:33:02.571159] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.083 [2024-07-25 07:33:02.581401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.083 qpair failed and we were unable to recover it.
00:27:30.083 [2024-07-25 07:33:02.591065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.083 [2024-07-25 07:33:02.591098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.083 [2024-07-25 07:33:02.591114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.083 [2024-07-25 07:33:02.591124] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.083 [2024-07-25 07:33:02.591132] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.083 [2024-07-25 07:33:02.601170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.083 qpair failed and we were unable to recover it.
00:27:30.083 [2024-07-25 07:33:02.611186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.343 [2024-07-25 07:33:02.611221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.343 [2024-07-25 07:33:02.611237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.343 [2024-07-25 07:33:02.611253] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.343 [2024-07-25 07:33:02.611263] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.343 [2024-07-25 07:33:02.621606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.343 qpair failed and we were unable to recover it.
00:27:30.343 [2024-07-25 07:33:02.631158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.343 [2024-07-25 07:33:02.631201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.343 [2024-07-25 07:33:02.631217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.343 [2024-07-25 07:33:02.631226] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.343 [2024-07-25 07:33:02.631235] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.343 [2024-07-25 07:33:02.641519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.343 qpair failed and we were unable to recover it.
00:27:30.343 [2024-07-25 07:33:02.651289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.343 [2024-07-25 07:33:02.651323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.344 [2024-07-25 07:33:02.651339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.344 [2024-07-25 07:33:02.651349] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.344 [2024-07-25 07:33:02.651357] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.344 [2024-07-25 07:33:02.661657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.344 qpair failed and we were unable to recover it.
00:27:30.344 [2024-07-25 07:33:02.671264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.344 [2024-07-25 07:33:02.671298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.344 [2024-07-25 07:33:02.671313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.344 [2024-07-25 07:33:02.671322] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.344 [2024-07-25 07:33:02.671331] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.344 [2024-07-25 07:33:02.681639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.344 qpair failed and we were unable to recover it.
00:27:30.344 [2024-07-25 07:33:02.691291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.344 [2024-07-25 07:33:02.691331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.344 [2024-07-25 07:33:02.691347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.344 [2024-07-25 07:33:02.691356] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.344 [2024-07-25 07:33:02.691365] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.344 [2024-07-25 07:33:02.701921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.344 qpair failed and we were unable to recover it.
00:27:30.344 [2024-07-25 07:33:02.711368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.344 [2024-07-25 07:33:02.711410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.344 [2024-07-25 07:33:02.711426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.344 [2024-07-25 07:33:02.711435] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.344 [2024-07-25 07:33:02.711444] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.344 [2024-07-25 07:33:02.721879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.344 qpair failed and we were unable to recover it.
00:27:30.344 [2024-07-25 07:33:02.731478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.344 [2024-07-25 07:33:02.731514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.344 [2024-07-25 07:33:02.731531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.344 [2024-07-25 07:33:02.731540] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.344 [2024-07-25 07:33:02.731549] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.344 [2024-07-25 07:33:02.741906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.344 qpair failed and we were unable to recover it.
00:27:30.344 [2024-07-25 07:33:02.751526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.344 [2024-07-25 07:33:02.751564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.344 [2024-07-25 07:33:02.751580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.344 [2024-07-25 07:33:02.751589] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.344 [2024-07-25 07:33:02.751598] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.344 [2024-07-25 07:33:02.761939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.344 qpair failed and we were unable to recover it.
00:27:30.344 [2024-07-25 07:33:02.771591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.344 [2024-07-25 07:33:02.771634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.344 [2024-07-25 07:33:02.771650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.344 [2024-07-25 07:33:02.771660] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.344 [2024-07-25 07:33:02.771668] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.344 [2024-07-25 07:33:02.782104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.344 qpair failed and we were unable to recover it.
00:27:30.344 [2024-07-25 07:33:02.791745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.344 [2024-07-25 07:33:02.791786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.344 [2024-07-25 07:33:02.791805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.344 [2024-07-25 07:33:02.791815] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.344 [2024-07-25 07:33:02.791823] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.344 [2024-07-25 07:33:02.802164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.344 qpair failed and we were unable to recover it.
00:27:30.344 [2024-07-25 07:33:02.811810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.344 [2024-07-25 07:33:02.811849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.344 [2024-07-25 07:33:02.811865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.344 [2024-07-25 07:33:02.811874] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.344 [2024-07-25 07:33:02.811883] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.344 [2024-07-25 07:33:02.822116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.344 qpair failed and we were unable to recover it.
00:27:30.344 [2024-07-25 07:33:02.831775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.344 [2024-07-25 07:33:02.831814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.344 [2024-07-25 07:33:02.831830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.344 [2024-07-25 07:33:02.831839] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.344 [2024-07-25 07:33:02.831848] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.344 [2024-07-25 07:33:02.842042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.344 qpair failed and we were unable to recover it.
00:27:30.344 [2024-07-25 07:33:02.851906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.344 [2024-07-25 07:33:02.851942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.344 [2024-07-25 07:33:02.851958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.344 [2024-07-25 07:33:02.851967] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.344 [2024-07-25 07:33:02.851976] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.344 [2024-07-25 07:33:02.862256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.344 qpair failed and we were unable to recover it.
00:27:30.344 [2024-07-25 07:33:02.871869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.344 [2024-07-25 07:33:02.871914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.344 [2024-07-25 07:33:02.871929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.344 [2024-07-25 07:33:02.871938] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.344 [2024-07-25 07:33:02.871950] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.604 [2024-07-25 07:33:02.882343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.604 qpair failed and we were unable to recover it.
00:27:30.604 [2024-07-25 07:33:02.891980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.604 [2024-07-25 07:33:02.892020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.604 [2024-07-25 07:33:02.892036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.604 [2024-07-25 07:33:02.892046] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.604 [2024-07-25 07:33:02.892054] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.604 [2024-07-25 07:33:02.902314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.604 qpair failed and we were unable to recover it.
00:27:30.604 [2024-07-25 07:33:02.912005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.604 [2024-07-25 07:33:02.912044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.604 [2024-07-25 07:33:02.912060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.604 [2024-07-25 07:33:02.912069] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.604 [2024-07-25 07:33:02.912077] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.604 [2024-07-25 07:33:02.922367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.604 qpair failed and we were unable to recover it.
00:27:30.604 [2024-07-25 07:33:02.932134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.604 [2024-07-25 07:33:02.932170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.604 [2024-07-25 07:33:02.932186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.604 [2024-07-25 07:33:02.932195] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.604 [2024-07-25 07:33:02.932203] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.604 [2024-07-25 07:33:02.942340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.605 qpair failed and we were unable to recover it.
00:27:30.605 [2024-07-25 07:33:02.952158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.605 [2024-07-25 07:33:02.952199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.605 [2024-07-25 07:33:02.952214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.605 [2024-07-25 07:33:02.952224] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.605 [2024-07-25 07:33:02.952232] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.605 [2024-07-25 07:33:02.962620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.605 qpair failed and we were unable to recover it.
00:27:30.605 [2024-07-25 07:33:02.972192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.605 [2024-07-25 07:33:02.972229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.605 [2024-07-25 07:33:02.972246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.605 [2024-07-25 07:33:02.972255] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.605 [2024-07-25 07:33:02.972263] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.605 [2024-07-25 07:33:02.982586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.605 qpair failed and we were unable to recover it.
00:27:30.605 [2024-07-25 07:33:02.992253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.605 [2024-07-25 07:33:02.992290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.605 [2024-07-25 07:33:02.992306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.605 [2024-07-25 07:33:02.992315] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.605 [2024-07-25 07:33:02.992324] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.605 [2024-07-25 07:33:03.002769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.605 qpair failed and we were unable to recover it.
00:27:30.605 [2024-07-25 07:33:03.012328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.605 [2024-07-25 07:33:03.012367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.605 [2024-07-25 07:33:03.012383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.605 [2024-07-25 07:33:03.012392] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.605 [2024-07-25 07:33:03.012401] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.605 [2024-07-25 07:33:03.022792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.605 qpair failed and we were unable to recover it.
00:27:30.605 [2024-07-25 07:33:03.032383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.605 [2024-07-25 07:33:03.032420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.605 [2024-07-25 07:33:03.032436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.605 [2024-07-25 07:33:03.032445] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.605 [2024-07-25 07:33:03.032453] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.605 [2024-07-25 07:33:03.042724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.605 qpair failed and we were unable to recover it.
00:27:30.605 [2024-07-25 07:33:03.052547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.605 [2024-07-25 07:33:03.052585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.605 [2024-07-25 07:33:03.052601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.605 [2024-07-25 07:33:03.052613] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.605 [2024-07-25 07:33:03.052622] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.605 [2024-07-25 07:33:03.062866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.605 qpair failed and we were unable to recover it.
00:27:30.605 [2024-07-25 07:33:03.072510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.605 [2024-07-25 07:33:03.072547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.605 [2024-07-25 07:33:03.072563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.605 [2024-07-25 07:33:03.072573] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.605 [2024-07-25 07:33:03.072581] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.605 [2024-07-25 07:33:03.082827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.605 qpair failed and we were unable to recover it.
00:27:30.605 [2024-07-25 07:33:03.092500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.605 [2024-07-25 07:33:03.092538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.605 [2024-07-25 07:33:03.092554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.605 [2024-07-25 07:33:03.092563] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.605 [2024-07-25 07:33:03.092572] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.605 [2024-07-25 07:33:03.103129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.605 qpair failed and we were unable to recover it.
00:27:30.605 [2024-07-25 07:33:03.112568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.605 [2024-07-25 07:33:03.112615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.605 [2024-07-25 07:33:03.112635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.605 [2024-07-25 07:33:03.112645] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.605 [2024-07-25 07:33:03.112653] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.605 [2024-07-25 07:33:03.122986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.605 qpair failed and we were unable to recover it.
00:27:30.605 [2024-07-25 07:33:03.132648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.605 [2024-07-25 07:33:03.132685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.605 [2024-07-25 07:33:03.132701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.605 [2024-07-25 07:33:03.132710] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.605 [2024-07-25 07:33:03.132718] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.865 [2024-07-25 07:33:03.143057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.865 qpair failed and we were unable to recover it.
00:27:30.865 [2024-07-25 07:33:03.152755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.865 [2024-07-25 07:33:03.152795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.865 [2024-07-25 07:33:03.152811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.865 [2024-07-25 07:33:03.152820] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.865 [2024-07-25 07:33:03.152829] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.865 [2024-07-25 07:33:03.163135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.865 qpair failed and we were unable to recover it.
00:27:30.865 [2024-07-25 07:33:03.172683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.865 [2024-07-25 07:33:03.172722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.865 [2024-07-25 07:33:03.172738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.865 [2024-07-25 07:33:03.172747] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.865 [2024-07-25 07:33:03.172756] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.865 [2024-07-25 07:33:03.183180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.865 qpair failed and we were unable to recover it.
00:27:30.865 [2024-07-25 07:33:03.192834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.865 [2024-07-25 07:33:03.192869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.865 [2024-07-25 07:33:03.192885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.865 [2024-07-25 07:33:03.192894] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.865 [2024-07-25 07:33:03.192903] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.865 [2024-07-25 07:33:03.203359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.865 qpair failed and we were unable to recover it.
00:27:30.865 [2024-07-25 07:33:03.212884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.865 [2024-07-25 07:33:03.212927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.865 [2024-07-25 07:33:03.212943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.865 [2024-07-25 07:33:03.212953] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.865 [2024-07-25 07:33:03.212961] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.865 [2024-07-25 07:33:03.223576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.865 qpair failed and we were unable to recover it.
00:27:30.865 [2024-07-25 07:33:03.232918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.866 [2024-07-25 07:33:03.232955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.866 [2024-07-25 07:33:03.232976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.866 [2024-07-25 07:33:03.232985] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.866 [2024-07-25 07:33:03.232994] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.866 [2024-07-25 07:33:03.243471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.866 qpair failed and we were unable to recover it.
00:27:30.866 [2024-07-25 07:33:03.253067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.866 [2024-07-25 07:33:03.253103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.866 [2024-07-25 07:33:03.253119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.866 [2024-07-25 07:33:03.253128] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.866 [2024-07-25 07:33:03.253136] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.866 [2024-07-25 07:33:03.263690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.866 qpair failed and we were unable to recover it.
00:27:30.866 [2024-07-25 07:33:03.273143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.866 [2024-07-25 07:33:03.273180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.866 [2024-07-25 07:33:03.273195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.866 [2024-07-25 07:33:03.273205] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.866 [2024-07-25 07:33:03.273214] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.866 [2024-07-25 07:33:03.283336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.866 qpair failed and we were unable to recover it.
00:27:30.866 [2024-07-25 07:33:03.293161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.866 [2024-07-25 07:33:03.293198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.866 [2024-07-25 07:33:03.293215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.866 [2024-07-25 07:33:03.293224] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.866 [2024-07-25 07:33:03.293232] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.866 [2024-07-25 07:33:03.303711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.866 qpair failed and we were unable to recover it.
00:27:30.866 [2024-07-25 07:33:03.313344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.866 [2024-07-25 07:33:03.313379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.866 [2024-07-25 07:33:03.313394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.866 [2024-07-25 07:33:03.313404] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.866 [2024-07-25 07:33:03.313415] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:27:30.866 [2024-07-25 07:33:03.323741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.866 qpair failed and we were unable to recover it.
00:27:30.866 [2024-07-25 07:33:03.323874] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:27:30.866 A controller has encountered a failure and is being reset.
00:27:30.866 [2024-07-25 07:33:03.323995] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:27:30.866 [2024-07-25 07:33:03.357228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:30.866 Controller properly reset.
00:27:31.125 Initializing NVMe Controllers
00:27:31.125 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:27:31.125 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:27:31.125 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:31.125 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:31.125 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:31.125 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:31.125 Initialization complete. Launching workers.
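
The stanza above captures the tc2 recovery path end-to-end: the I/O-qpair CONNECT is rejected over and over (the target logs "Unknown controller ID 0x1", the host sees sct 1, sc 130), consistent with the CONNECT carrying a controller ID from the stale pre-restart admin association; a failed Keep Alive then forces a full controller reset, after which the host reattaches and re-associates qpairs with each lcore. The sketch below is a hedged illustration of that detect-and-reset loop using public SPDK host APIs (spdk_nvme_qpair_process_completions, spdk_nvme_ctrlr_is_failed, spdk_nvme_ctrlr_reset, spdk_nvme_ctrlr_alloc_io_qpair). It is not the source of build/examples/reconnect, and poll_or_recover is a hypothetical helper name; the real tool additionally tracks and resubmits the I/O that was in flight when the qpair died.

    /* Hedged sketch of the reset-and-reattach pattern logged above.
     * Assumption: single-threaded polling on one I/O qpair; the name
     * poll_or_recover() and the error handling are illustrative only. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    poll_or_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        /* Reap completions; a negative return (e.g. -ENXIO, printed above as
         * "CQ transport error -6 (No such device or address)") means the
         * transport has failed under us. The 0 means no completion limit. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc >= 0 && !spdk_nvme_ctrlr_is_failed(ctrlr)) {
            return qpair; /* Healthy: keep polling this qpair. */
        }

        /* "A controller has encountered a failure and is being reset." */
        fprintf(stderr, "qpair failed (rc=%d); resetting controller\n", rc);
        spdk_nvme_ctrlr_free_io_qpair(qpair);

        if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
            return NULL; /* Target still unreachable; caller retries later. */
        }

        /* "Controller properly reset.": the admin queue is reconnected, but
         * I/O qpairs do not survive a reset and must be re-allocated. */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    }
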
00:27:31.125 Starting thread on core 1 00:27:31.125 Starting thread on core 2 00:27:31.125 Starting thread on core 3 00:27:31.125 Starting thread on core 0 00:27:31.125 07:33:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:31.125 00:27:31.125 real 0m11.520s 00:27:31.125 user 0m25.255s 00:27:31.125 sys 0m2.956s 00:27:31.125 07:33:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:31.125 07:33:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.125 ************************************ 00:27:31.125 END TEST nvmf_target_disconnect_tc2 00:27:31.125 ************************************ 00:27:31.125 07:33:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:27:31.125 07:33:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:27:31.125 07:33:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:31.125 07:33:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.125 07:33:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:31.125 ************************************ 00:27:31.125 START TEST nvmf_target_disconnect_tc3 00:27:31.125 ************************************ 00:27:31.125 07:33:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc3 00:27:31.125 07:33:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=2837446 00:27:31.125 07:33:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:27:31.125 07:33:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:27:31.125 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.028 07:33:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 2836218 00:27:33.028 07:33:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:27:34.408 Write completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Write completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Read completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Write completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Read completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Write completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Read completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Read completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Write completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Write completed with error (sct=0, 
sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Write completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Read completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Write completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Read completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Read completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Read completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Write completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Write completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Read completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.408 Write completed with error (sct=0, sc=8) 00:27:34.408 starting I/O failed 00:27:34.409 Read completed with error (sct=0, sc=8) 00:27:34.409 starting I/O failed 00:27:34.409 Read completed with error (sct=0, sc=8) 00:27:34.409 starting I/O failed 00:27:34.409 Read completed with error (sct=0, sc=8) 00:27:34.409 starting I/O failed 00:27:34.409 Read completed with error (sct=0, sc=8) 00:27:34.409 starting I/O failed 00:27:34.409 Read completed with error (sct=0, sc=8) 00:27:34.409 starting I/O failed 00:27:34.409 Write completed with error (sct=0, sc=8) 00:27:34.409 starting I/O failed 00:27:34.409 Write completed with error (sct=0, sc=8) 00:27:34.409 starting I/O failed 00:27:34.409 Read completed with error (sct=0, sc=8) 00:27:34.409 starting I/O failed 00:27:34.409 Read completed with error (sct=0, sc=8) 00:27:34.409 starting I/O failed 00:27:34.409 Write completed with error (sct=0, sc=8) 00:27:34.409 starting I/O failed 00:27:34.409 Read completed with error (sct=0, sc=8) 00:27:34.409 starting I/O failed 00:27:34.409 Read completed with error (sct=0, sc=8) 00:27:34.409 starting I/O failed 00:27:34.409 [2024-07-25 07:33:06.750297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.346 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 2836218 Killed "${NVMF_APP[@]}" "$@" 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2838236 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2838236 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:35.346 07:33:07 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2838236 ']' 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.346 07:33:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:35.346 [2024-07-25 07:33:07.580679] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:27:35.346 [2024-07-25 07:33:07.580729] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.346 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.346 [2024-07-25 07:33:07.669104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:35.346 [2024-07-25 07:33:07.741499] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.346 [2024-07-25 07:33:07.741540] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.346 [2024-07-25 07:33:07.741550] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.346 [2024-07-25 07:33:07.741558] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.346 [2024-07-25 07:33:07.741582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
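For orientation: the tc3 case above drives traffic with SPDK's reconnect example, and the transport ID it is given carries both the primary target address and an alternate one. A minimal sketch of that invocation, using only the flags and addresses visible in this log (run from the SPDK build tree; the flag readings in the comments follow SPDK's perf-style conventions and are noted as assumptions, not taken from this log):

    # -q 32: queue depth; -o 4096: I/O size in bytes; -w randrw: mixed random workload
    # -M 50: read percentage; -t 10: run time in seconds; -c 0xF: core mask (4 cores)
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'

Killing the original target (the kill -9 of pid 2836218 above) leaves nothing listening at traddr, which is why every CONNECT below is rejected until a replacement target is brought up; alt_traddr is the address the example falls back to once Keep Alive declares the controller failed.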
00:27:35.346 [2024-07-25 07:33:07.741697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:27:35.346 [2024-07-25 07:33:07.741806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:27:35.346 [2024-07-25 07:33:07.741913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:27:35.346 [2024-07-25 07:33:07.741914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Read completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 Write completed with error (sct=0, sc=8) 00:27:35.346 starting I/O failed 00:27:35.346 [2024-07-25 07:33:07.755377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.915 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:35.915 07:33:08 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:35.915 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:35.915 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:35.915 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.175 Malloc0 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.175 [2024-07-25 07:33:08.512071] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1be9e40/0x1bf5bc0) succeed. 00:27:36.175 [2024-07-25 07:33:08.521722] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1beb480/0x1c37250) succeed. 
00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.175 [2024-07-25 07:33:08.659916] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.175 07:33:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 2837446 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 
starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Write completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 Read completed with error (sct=0, sc=8) 00:27:36.435 starting I/O failed 00:27:36.435 [2024-07-25 07:33:08.760431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.435 [2024-07-25 07:33:08.761955] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:36.435 [2024-07-25 07:33:08.761973] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:36.435 [2024-07-25 07:33:08.761982] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:37.373 [2024-07-25 07:33:09.765871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.373 qpair failed and we were unable to recover it. 
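For reference, the target-side provisioning that tc3 performed through rpc_cmd above maps directly onto plain scripts/rpc.py calls. A minimal sketch against the default RPC socket (/var/tmp/spdk.sock, the socket the log waits on), using exactly the objects and parameters shown in the log:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MB malloc bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420

Note that both listeners are created on 192.168.100.9, the alt_traddr: the replacement target comes up only on the failover address, so the host can recover only by switching paths.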
00:27:37.373 [2024-07-25 07:33:09.767477] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:37.373 [2024-07-25 07:33:09.767495] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:37.373 [2024-07-25 07:33:09.767504] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:38.310 [2024-07-25 07:33:10.771312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.310 qpair failed and we were unable to recover it. 00:27:38.310 [2024-07-25 07:33:10.772848] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:38.310 [2024-07-25 07:33:10.772868] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:38.310 [2024-07-25 07:33:10.772876] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:39.248 [2024-07-25 07:33:11.776767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.248 qpair failed and we were unable to recover it. 00:27:39.508 [2024-07-25 07:33:11.778266] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:39.508 [2024-07-25 07:33:11.778284] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:39.508 [2024-07-25 07:33:11.778293] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:40.445 [2024-07-25 07:33:12.782137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.445 qpair failed and we were unable to recover it. 00:27:40.445 [2024-07-25 07:33:12.783629] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:40.445 [2024-07-25 07:33:12.783647] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:40.445 [2024-07-25 07:33:12.783655] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:41.381 [2024-07-25 07:33:13.787649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.382 qpair failed and we were unable to recover it. 00:27:41.382 [2024-07-25 07:33:13.789085] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:41.382 [2024-07-25 07:33:13.789102] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:41.382 [2024-07-25 07:33:13.789110] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:42.319 [2024-07-25 07:33:14.792878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.319 qpair failed and we were unable to recover it. 
00:27:42.319 [2024-07-25 07:33:14.794471] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:42.319 [2024-07-25 07:33:14.794488] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:42.319 [2024-07-25 07:33:14.794496] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:43.697 [2024-07-25 07:33:15.798387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:43.697 qpair failed and we were unable to recover it. 00:27:43.697 [2024-07-25 07:33:15.799909] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:43.697 [2024-07-25 07:33:15.799934] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:43.697 [2024-07-25 07:33:15.799943] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:44.634 [2024-07-25 07:33:16.803819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:44.634 qpair failed and we were unable to recover it. 00:27:44.634 [2024-07-25 07:33:16.805246] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:44.634 [2024-07-25 07:33:16.805263] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:44.634 [2024-07-25 07:33:16.805272] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:45.571 [2024-07-25 07:33:17.809071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:45.571 qpair failed and we were unable to recover it. 00:27:45.571 [2024-07-25 07:33:17.809174] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:45.571 A controller has encountered a failure and is being reset. 00:27:45.571 Resorting to new failover address 192.168.100.9 00:27:45.571 [2024-07-25 07:33:17.810929] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:45.571 [2024-07-25 07:33:17.810957] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:45.571 [2024-07-25 07:33:17.810970] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:46.506 [2024-07-25 07:33:18.814943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:46.506 qpair failed and we were unable to recover it. 
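When failover attempts keep getting rejected like this, one quick check (a debugging aside, not something this script runs; it assumes the default /var/tmp/spdk.sock on the target host) is to confirm that the new target actually exposes the subsystem on the alternate address:

    ./scripts/rpc.py nvmf_get_subsystems                            # dumps every subsystem with its listen_addresses
    ./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1

In this run the rejections stop once the replacement target finishes starting and the reset path completes ("Controller properly reset" below).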
00:27:46.506 [2024-07-25 07:33:18.816374] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:46.506 [2024-07-25 07:33:18.816391] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:46.506 [2024-07-25 07:33:18.816400] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:47.506 [2024-07-25 07:33:19.820302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.506 qpair failed and we were unable to recover it. 00:27:47.506 [2024-07-25 07:33:19.820401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.506 [2024-07-25 07:33:19.820505] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:27:47.506 [2024-07-25 07:33:19.822815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:47.506 Controller properly reset. 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Write completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Write completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Write completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Write completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Write completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Write completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Write completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Write completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, 
sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Write completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Write completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Write completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 Read completed with error (sct=0, sc=8) 00:27:48.442 starting I/O failed 00:27:48.442 [2024-07-25 07:33:20.866864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.442 Initializing NVMe Controllers 00:27:48.442 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:48.442 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:48.442 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:48.442 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:48.442 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:48.442 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:48.442 Initialization complete. Launching workers. 00:27:48.442 Starting thread on core 1 00:27:48.442 Starting thread on core 2 00:27:48.442 Starting thread on core 3 00:27:48.442 Starting thread on core 0 00:27:48.442 07:33:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:27:48.442 00:27:48.442 real 0m17.384s 00:27:48.442 user 0m55.500s 00:27:48.442 sys 0m5.689s 00:27:48.442 07:33:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:48.442 07:33:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.442 ************************************ 00:27:48.442 END TEST nvmf_target_disconnect_tc3 00:27:48.442 ************************************ 00:27:48.442 07:33:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:48.442 07:33:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:48.442 07:33:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:48.442 07:33:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:48.442 07:33:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:48.442 07:33:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:48.442 07:33:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:48.442 07:33:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:48.442 07:33:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:48.701 rmmod nvme_rdma 00:27:48.701 rmmod nvme_fabrics 00:27:48.701 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.701 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:48.701 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:48.701 
07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2838236 ']' 00:27:48.701 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2838236 00:27:48.702 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2838236 ']' 00:27:48.702 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2838236 00:27:48.702 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:27:48.702 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:48.702 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2838236 00:27:48.702 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:27:48.702 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:27:48.702 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2838236' 00:27:48.702 killing process with pid 2838236 00:27:48.702 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2838236 00:27:48.702 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2838236 00:27:48.961 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:48.961 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:48.961 00:27:48.961 real 0m39.215s 00:27:48.961 user 2m17.943s 00:27:48.961 sys 0m15.813s 00:27:48.961 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:48.961 07:33:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:48.961 ************************************ 00:27:48.961 END TEST nvmf_target_disconnect 00:27:48.961 ************************************ 00:27:48.961 07:33:21 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:48.961 00:27:48.961 real 5m41.398s 00:27:48.961 user 12m48.960s 00:27:48.961 sys 1m56.658s 00:27:48.961 07:33:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:48.961 07:33:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.961 ************************************ 00:27:48.961 END TEST nvmf_host 00:27:48.961 ************************************ 00:27:48.961 00:27:48.961 real 19m29.349s 00:27:48.961 user 44m18.656s 00:27:48.961 sys 6m13.483s 00:27:48.961 07:33:21 nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:48.961 07:33:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:48.961 ************************************ 00:27:48.961 END TEST nvmf_rdma 00:27:48.961 ************************************ 00:27:48.961 07:33:21 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:27:48.961 07:33:21 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:48.961 07:33:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:48.961 07:33:21 -- common/autotest_common.sh@10 -- # set +x 00:27:49.221 ************************************ 00:27:49.221 START TEST 
spdkcli_nvmf_rdma 00:27:49.221 ************************************ 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:27:49.221 * Looking for test storage... 00:27:49.221 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2841094 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 2841094 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # '[' -z 2841094 ']' 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:49.221 07:33:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:49.221 [2024-07-25 07:33:21.688513] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:27:49.221 [2024-07-25 07:33:21.688570] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841094 ] 00:27:49.221 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.480 [2024-07-25 07:33:21.772109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:49.480 [2024-07-25 07:33:21.846324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.480 [2024-07-25 07:33:21.846327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # return 0 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:27:50.048 07:33:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.030 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:00.031 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:00.031 
Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:00.031 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:00.031 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:00.031 
07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:00.031 07:33:30 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:00.031 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:00.031 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:00.031 altname enp217s0f0np0 00:28:00.031 altname ens818f0np0 00:28:00.031 inet 192.168.100.8/24 scope global mlx_0_0 00:28:00.031 valid_lft forever preferred_lft forever 00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:00.031 
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}'
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:28:00.031 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:28:00.031 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:28:00.031 altname enp217s0f1np1
00:28:00.031 altname ens818f1np1
00:28:00.031 inet 192.168.100.9/24 scope global mlx_0_1
00:28:00.031 valid_lft forever preferred_lft forever
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]]
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:00.031 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}'
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}'
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8
00:28:00.032 192.168.100.9'
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8
00:28:00.032 192.168.100.9'
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8
00:28:00.032 192.168.100.9'
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']'
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']'
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']'
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:28:00.032 07:33:31 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:28:00.032 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:28:00.032 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:28:00.032 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:28:00.032 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:28:00.032 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:28:00.032 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:28:00.032 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:28:00.032 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:28:00.032 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:28:00.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:28:00.032 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:28:00.032 '
00:28:01.411 [2024-07-25 07:33:33.550269] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1935f20/0x19476f0) succeed.
00:28:01.411 [2024-07-25 07:33:33.559905] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19374c0/0x1988d80) succeed.
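spdkcli_job.py takes (command, expected-match, reissue) triples and replays them through SPDK's spdkcli shell, as the "Executing command" lines below confirm. The same NVMe-oF layout could be built one command at a time with scripts/spdkcli.py, the non-interactive form also used by check_match later in this run; a hedged sketch:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
  ./scripts/spdkcli.py nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4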
00:28:02.348 [2024-07-25 07:33:34.797651] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 ***
00:28:04.883 [2024-07-25 07:33:36.972483] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 ***
00:28:06.789 [2024-07-25 07:33:38.850673] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 ***
00:28:08.168 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:28:08.168 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:28:08.168 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:28:08.168 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:28:08.168 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:28:08.168 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:28:08.168 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:28:08.168 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:28:08.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:28:08.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:28:08.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:28:08.168 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:28:08.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:28:08.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:28:08.168 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:28:08.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:28:08.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:28:08.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:28:08.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:28:08.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:28:08.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:28:08.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:28:08.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:28:08.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True]
00:28:08.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:28:08.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:28:08.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:28:08.169 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:28:08.169 07:33:40 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:28:08.169 07:33:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:08.169 07:33:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:28:08.169 07:33:40 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:28:08.169 07:33:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:08.169 07:33:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:28:08.169 07:33:40 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match
00:28:08.169 07:33:40 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:28:08.428 07:33:40 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:28:08.428 07:33:40 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:28:08.428 07:33:40 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:28:08.428 07:33:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:08.428 07:33:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:28:08.428 07:33:40 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:28:08.428 07:33:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:08.428 07:33:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:28:08.428 07:33:40 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:28:08.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:28:08.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:28:08.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:28:08.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\''
00:28:08.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\''
00:28:08.428 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:28:08.428 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:28:08.428 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:28:08.428 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:28:08.428 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:28:08.428 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:28:08.428 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:28:08.428 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:28:08.428 '
00:28:13.705 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:28:13.705 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:28:13.705 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:28:13.705 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:28:13.705 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False]
00:28:13.705 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False]
00:28:13.705 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:28:13.705 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:28:13.705 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:28:13.705 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:28:13.705 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:28:13.705 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:28:13.705 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:28:13.705 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 2841094
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # '[' -z 2841094 ']'
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # kill -0 2841094
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # uname
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2841094
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2841094'
00:28:13.705 killing process with pid 2841094
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@969 -- # kill 2841094
00:28:13.705 07:33:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@974 -- # wait 2841094
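killprocess, walked through line by line above, validates the pid with kill -0, refuses to signal a process whose comm is sudo, then kills and reaps it. Condensed to its core (simplified from the autotest_common.sh trace, not a verbatim copy):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 0      # nothing left to kill
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1  # never signal the sudo wrapper itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }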
00:28:13.705 07:33:46 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini
00:28:13.705 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:13.705 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync
00:28:13.705 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:28:13.705 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:28:13.705 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e
00:28:13.705 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:13.705 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:28:13.964 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:13.964 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e
00:28:13.964 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0
00:28:13.964 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:28:13.964 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:13.964 07:33:46 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:28:13.964
00:28:13.964 real 0m24.763s
00:28:13.964 user 0m52.711s
00:28:13.964 sys 0m7.492s
00:28:13.964 07:33:46 spdkcli_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:13.964 07:33:46 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:28:13.964 ************************************
00:28:13.964 END TEST spdkcli_nvmf_rdma
00:28:13.964 ************************************
00:28:13.964 07:33:46 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:28:13.964 07:33:46 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:28:13.964 07:33:46 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:28:13.964 07:33:46 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:28:13.964 07:33:46 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:28:13.964 07:33:46 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:28:13.964 07:33:46 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:28:13.964 07:33:46 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:28:13.964 07:33:46 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:28:13.964 07:33:46 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:28:13.964 07:33:46 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:28:13.964 07:33:46 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:28:13.964 07:33:46 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:28:13.964 07:33:46 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:28:13.964 07:33:46 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:28:13.964 07:33:46 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:28:13.964 07:33:46 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:28:13.964 07:33:46 -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:13.964 07:33:46 -- common/autotest_common.sh@10 -- # set +x
00:28:13.964 07:33:46 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:28:13.964 07:33:46 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:28:13.964 07:33:46 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:28:13.964 07:33:46 -- common/autotest_common.sh@10 -- # set +x
00:28:20.585 INFO: APP EXITING
00:28:20.585 INFO: killing all VMs
00:28:20.585 INFO: killing vhost app
00:28:20.585 WARN: no vhost pid file found
00:28:20.585 INFO: EXIT DONE
00:28:23.120 Waiting for block devices as requested
00:28:23.120 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:28:23.120 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:28:23.120 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:28:23.379 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:28:23.379 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:28:23.379 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:28:23.379 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:28:23.639 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:28:23.639 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:28:23.639 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:28:23.639 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:28:23.898 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:28:23.898 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:28:23.898 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:28:24.158 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:28:24.158 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:28:24.158 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
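Each "vfio-pci -> ioatdma" (or "-> nvme") line above is a PCI function being handed back from vfio-pci to its kernel driver by the setup scripts. The generic sysfs mechanism behind such a rebind looks roughly like this; the helper name is ours and the real scripts do more bookkeeping:

  rebind_pci() {
      local bdf=$1 driver=$2
      # Detach from whatever driver currently owns the function.
      if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
          echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
      fi
      # Ask the target driver to claim it regardless of its ID table, then reprobe.
      echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"
      echo "$bdf" > /sys/bus/pci/drivers_probe
  }

  rebind_pci 0000:00:04.7 ioatdma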
00:28:28.355 Cleaning
00:28:28.355 Removing: /var/run/dpdk/spdk0/config
00:28:28.355 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:28:28.355 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:28:28.355 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:28:28.355 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:28:28.355 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:28:28.355 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:28:28.355 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:28:28.355 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:28:28.355 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:28:28.355 Removing: /var/run/dpdk/spdk0/hugepage_info
00:28:28.355 Removing: /var/run/dpdk/spdk1/config
00:28:28.355 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:28:28.355 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:28:28.355 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:28:28.355 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:28:28.355 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:28:28.355 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:28:28.355 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:28:28.355 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:28:28.355 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:28:28.355 Removing: /var/run/dpdk/spdk1/hugepage_info
00:28:28.355 Removing: /var/run/dpdk/spdk1/mp_socket
00:28:28.355 Removing: /var/run/dpdk/spdk2/config
00:28:28.355 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:28:28.355 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:28:28.355 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:28:28.355 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:28:28.355 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:28:28.355 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:28:28.355 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:28:28.355 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:28:28.355 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:28:28.355 Removing: /var/run/dpdk/spdk2/hugepage_info
00:28:28.355 Removing: /var/run/dpdk/spdk3/config
00:28:28.355 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:28:28.355 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:28:28.355 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:28:28.355 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:28:28.355 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:28:28.355 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:28:28.355 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:28:28.355 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:28:28.355 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:28:28.355 Removing: /var/run/dpdk/spdk3/hugepage_info
00:28:28.356 Removing: /var/run/dpdk/spdk4/config
00:28:28.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:28:28.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:28:28.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:28:28.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:28:28.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:28:28.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:28:28.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:28:28.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:28:28.356 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:28:28.356 Removing: /var/run/dpdk/spdk4/hugepage_info
00:28:28.356 Removing: /dev/shm/bdevperf_trace.pid2552826
00:28:28.356 Removing: /dev/shm/bdevperf_trace.pid2745121
00:28:28.356 Removing: /dev/shm/bdev_svc_trace.1
00:28:28.356 Removing: /dev/shm/nvmf_trace.0
00:28:28.356 Removing: /dev/shm/spdk_tgt_trace.pid2504410
00:28:28.356 Removing: /var/run/dpdk/spdk0
00:28:28.356 Removing: /var/run/dpdk/spdk1
00:28:28.356 Removing: /var/run/dpdk/spdk2
00:28:28.356 Removing: /var/run/dpdk/spdk3
00:28:28.356 Removing: /var/run/dpdk/spdk4
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2501279
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2502860
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2504410
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2504942
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2506026
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2506304
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2507164
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2507428
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2507692
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2513383
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2514838
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2515160
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2515479
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2515834
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2516153
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2516434
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2516715
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2517021
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2517619
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2520772
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2521074
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2521424
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2521635
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2522202
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2522320
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2522863
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2523046
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2523354
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2523576
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2523669
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2523919
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2524317
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2524581
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2524905
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2529773
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2534633
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2546009
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2547120
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2552826
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2553311
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2558093
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2564902
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2567667
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2579609
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2608718
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2613162
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2662259
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2668277
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2674746
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2684646
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2743088
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2744118
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2745121
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2750222
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2758648
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2759708
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2760515
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2761567
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2761849
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2767086
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2767088
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2772371
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2772919
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2773573
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2774234
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2774364
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2780031
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2780680
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2785749
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2788582
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2795389
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2806589
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2806653
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2828246
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2828515
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2835110
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2835670
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2837446
00:28:28.356 Removing: /var/run/dpdk/spdk_pid2841094
00:28:28.356 Clean
00:28:28.356 07:34:00 -- common/autotest_common.sh@1451 -- # return 0
00:28:28.356 07:34:00 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:28:28.356 07:34:00 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:28.356 07:34:00 -- common/autotest_common.sh@10 -- # set +x
00:28:28.356 07:34:00 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:28:28.356 07:34:00 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:28.356 07:34:00 -- common/autotest_common.sh@10 -- # set +x
00:28:28.356 07:34:00 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:28:28.356 07:34:00 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:28:28.356 07:34:00 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:28:28.616 07:34:00 -- spdk/autotest.sh@395 -- # hash lcov
00:28:28.616 07:34:00 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:28:28.616 07:34:00 -- spdk/autotest.sh@397 -- # hostname
00:28:28.616 07:34:00 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:28:28.616 geninfo: WARNING: invalid characters removed from testname!
00:28:50.551 07:34:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:28:50.551 07:34:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:28:51.929 07:34:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:28:53.305 07:34:25 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:28:55.211 07:34:27 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:28:56.589 07:34:29 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:28:58.535 07:34:30 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
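The coverage stage above first merges the base and test captures with lcov -a, then prunes DPDK, system headers, and example apps with successive lcov -r passes. Reduced to its essentials (paths shortened for readability; the --rc flags match the ones in the trace):

  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q'
  lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r cov_total.info "$pat" -o cov_total.info
  done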
00:28:58.535 07:34:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:28:58.535 07:34:30 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:58.535 07:34:30 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:58.535 07:34:30 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:58.535 07:34:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:58.535 07:34:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:58.535 07:34:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:58.535 07:34:30 -- paths/export.sh@5 -- $ export PATH
00:28:58.535 07:34:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:58.535 07:34:30 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:28:58.535 07:34:30 -- common/autobuild_common.sh@447 -- $ date +%s
00:28:58.535 07:34:30 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721885670.XXXXXX
00:28:58.535 07:34:30 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721885670.b94QUF
00:28:58.535 07:34:30 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:28:58.535 07:34:30 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:28:58.535 07:34:30 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:28:58.535 07:34:30 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:28:58.535 07:34:30 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:28:58.535 07:34:30 -- common/autobuild_common.sh@463 -- $ get_config_params
00:28:58.535 07:34:30 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:28:58.535 07:34:30 -- common/autotest_common.sh@10 -- $ set +x
00:28:58.535 07:34:30 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:28:58.535 07:34:30 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:28:58.535 07:34:30 -- pm/common@17 -- $ local monitor
00:28:58.535 07:34:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:58.535 07:34:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:58.535 07:34:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:58.535 07:34:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:58.535 07:34:30 -- pm/common@25 -- $ sleep 1
00:28:58.535 07:34:30 -- pm/common@21 -- $ date +%s
00:28:58.535 07:34:30 -- pm/common@21 -- $ date +%s
00:28:58.535 07:34:30 -- pm/common@21 -- $ date +%s
00:28:58.535 07:34:30 -- pm/common@21 -- $ date +%s
00:28:58.535 07:34:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721885670
00:28:58.535 07:34:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721885670
00:28:58.535 07:34:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721885670
00:28:58.535 07:34:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721885670
00:28:58.535 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721885670_collect-vmstat.pm.log
00:28:58.535 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721885670_collect-cpu-load.pm.log
00:28:58.535 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721885670_collect-cpu-temp.pm.log
00:28:58.535 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721885670_collect-bmc-pm.bmc.pm.log
00:28:59.474 07:34:31 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:28:59.474 07:34:31 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:28:59.474 07:34:31 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:28:59.474 07:34:31 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:59.474 07:34:31 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:59.474 07:34:31 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:59.474 07:34:31 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:59.474 07:34:31 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:59.474 07:34:31 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:28:59.474 07:34:31 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:59.474 07:34:31 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:28:59.474 07:34:31 -- pm/common@29 -- $ signal_monitor_resources TERM
00:28:59.474 07:34:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:28:59.474 07:34:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:59.474 07:34:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:28:59.474 07:34:31 -- pm/common@44 -- $ pid=2860009
00:28:59.474 07:34:31 -- pm/common@50 -- $ kill -TERM 2860009
00:28:59.474 07:34:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:59.474 07:34:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:28:59.474 07:34:31 -- pm/common@44 -- $ pid=2860010
00:28:59.474 07:34:31 -- pm/common@50 -- $ kill -TERM 2860010
00:28:59.474 07:34:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:59.474 07:34:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:28:59.474 07:34:31 -- pm/common@44 -- $ pid=2860012
00:28:59.474 07:34:31 -- pm/common@50 -- $ kill -TERM 2860012
00:28:59.474 07:34:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:59.474 07:34:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:28:59.474 07:34:31 -- pm/common@44 -- $ pid=2860034
00:28:59.474 07:34:31 -- pm/common@50 -- $ sudo -E kill -TERM 2860034
00:28:59.475 + [[ -n 2384350 ]]
00:28:59.475 + sudo kill 2384350
00:28:59.485 [Pipeline] }
00:28:59.508 [Pipeline] // stage
00:28:59.515 [Pipeline] }
00:28:59.533 [Pipeline] // timeout
00:28:59.538 [Pipeline] }
00:28:59.556 [Pipeline] // catchError
00:28:59.562 [Pipeline] }
00:28:59.579 [Pipeline] // wrap
00:28:59.589 [Pipeline] }
00:28:59.607 [Pipeline] // catchError
00:28:59.618 [Pipeline] stage
00:28:59.620 [Pipeline] { (Epilogue)
00:28:59.637 [Pipeline] catchError
00:28:59.638 [Pipeline] {
00:28:59.649 [Pipeline] echo
00:28:59.651 Cleanup processes
00:28:59.656 [Pipeline] sh
00:28:59.938 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:28:59.938 2860116 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache
00:28:59.938 2860456 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:28:59.952 [Pipeline] sh
00:29:00.236 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:29:00.236 ++ grep -v 'sudo pgrep'
00:29:00.236 ++ awk '{print $1}'
00:29:00.236 + sudo kill -9 2860116
00:29:00.249 [Pipeline] sh
00:29:00.533 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:00.533 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:29:04.724 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:29:08.027 [Pipeline] sh
00:29:08.315 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:08.315 Artifacts sizes are good
00:29:08.331 [Pipeline] archiveArtifacts
00:29:08.338 Archiving artifacts
00:29:08.474 [Pipeline] sh
00:29:08.760 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest
00:29:08.775 [Pipeline] cleanWs
00:29:08.784 [WS-CLEANUP] Deleting project workspace...
00:29:08.784 [WS-CLEANUP] Deferred wipeout is used...
00:29:08.790 [WS-CLEANUP] done
00:29:08.793 [Pipeline] }
00:29:08.814 [Pipeline] // catchError
00:29:08.827 [Pipeline] sh
00:29:09.108 + logger -p user.info -t JENKINS-CI
00:29:09.116 [Pipeline] }
00:29:09.131 [Pipeline] // stage
00:29:09.137 [Pipeline] }
00:29:09.153 [Pipeline] // node
00:29:09.158 [Pipeline] End of Pipeline
00:29:09.191 Finished: SUCCESS